Posted January 23, 2024

Last modified April 5, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Monday, April 8, 2024 Zoom (click here to join)

Posted November 17, 2003

Control and Optimization Seminar Questions or comments?

3:00 pm James E. Keisler Lounge (Room 321 Lockett)
Stanislav Zabic, Louisiana State University Department of Mathematics
Graduate Student

Optimizing the Design of the Michelin PAX Tire System

Abstract: This talk analyzes a problem encountered by the Michelin Corporation in the design of a 'run-flat', or PAX, tire system. A PAX tire system consists of an aluminum wheel of larger-than-conventional radius, a low-profile tire, and a special rubber support ring inside and concentric with the tire. The goal of the support ring is to provide a safe driving transition in case of a flat tire. After the air has deflated from the tire, the support ring carries the entire load of the car. We will discuss ways to optimize the design of the support ring. This research was carried out during the summer of 2001, while the speaker was a visitor at North Carolina State University.

Posted September 14, 2003

Last modified May 3, 2010

Control and Optimization Seminar Questions or comments?

3:00 pm 381 Lockett Hall
Michael Malisoff, LSU
Roy P. Daniels Professor

Lyapunov Functions and Viscosity Solutions, Part 1

Posted September 14, 2003

Last modified May 3, 2010

Control and Optimization Seminar Questions or comments?

3:00 pm 381 Lockett Hall
Michael Malisoff, LSU
Roy P. Daniels Professor

Lyapunov Functions and Viscosity Solutions, Part 2

Posted September 14, 2003

Last modified May 3, 2010

Control and Optimization Seminar Questions or comments?

3:00 pm 381 Lockett Hall
Michael Malisoff, LSU
Roy P. Daniels Professor

Lyapunov Functions and Viscosity Solutions, Part 3

Posted March 25, 2004

Last modified March 26, 2004

Control and Optimization Seminar Questions or comments?

3:40 pm 381 Lockett Hall
Vinicio Rios, LSU Department of Mathematics
PhD Student

A Theorem on Lipschitzian Approximation of Differential Inclusions

Posted September 19, 2003

Last modified March 2, 2021

Control and Optimization Seminar Questions or comments?

3:30 pm 381 Lockett Hall
Peter Wolenski, LSU Department of Mathematics
Russell B. Long Professor

Clarke's New Necessary Conditions in Dynamic Optimization

Posted August 26, 2003

Control and Optimization Seminar Questions or comments?

3:40 pm – 4:30 pm Lockett 277
Jesus Pascal, Universidad del Zulia, Venezuela

Free Boundary Control Problem

Posted September 24, 2003

Last modified January 10, 2022

Control and Optimization Seminar Questions or comments?

2:30 pm 240 Lockett Hall
Yuan Wang, Florida Atlantic University

A Relaxation Theorem for Differential Inclusions with Applications to Stability Properties

The fundamental Filippov-Ważewski Relaxation Theorem states that the solution set of an initial value problem for a locally Lipschitz differential inclusion is dense in the solution set of the same initial value problem for the corresponding relaxation inclusion on compact intervals. In this talk, I will discuss a complementary result which says that the approximation can be carried out over non-compact or infinite intervals provided one does not insist on the same initial values. To illustrate the motivations for studying such approximation results, I will briefly discuss some quick applications of the result to various stability and uniform stability properties.

Visit supported by Visiting Experts Program in Mathematics, Louisiana
Board of Regents. LEQSF(2002-04)-ENH-TR-13

Posted November 18, 2003

Last modified November 24, 2003

Control and Optimization Seminar Questions or comments?

2:30 pm 240 Lockett Hall
Tzanko Donchev, University of Architecture and Civil Engineering, BULGARIA

Singular Perturbations in Infinite Dimensional Control Systems

Abstract: We consider a singularly perturbed control system involving differential inclusions in Banach spaces with slow and fast solutions. Using the averaging approach, we obtain sufficient conditions for the Hausdorff convergence of the set of slow solutions in the sup norm. We present applications of the theorem to prove convergence of the fast solutions in terms of invariant measures and convergence of equi-Lipschitz solutions. We also present some illustrative examples.

Posted March 3, 2004

Control and Optimization Seminar Questions or comments?

2:30 pm 240 Lockett Hall
Zhijun Cai, Department of Mechanical Engineering, LSU
PhD Candidate

Adaptive Stabilization of Parametric Strict-Feedback Systems with Additive Disturbance

Abstract: This talk deals with the output regulation of uncertain, nonlinear, parametric strict-feedback systems in the presence of additive disturbance. A new continuous adaptive control law is proposed using a modified integrator backstepping design that ensures the output is asymptotically regulated to zero. Despite the disturbance, the adaptation law does not need the standard robustifying term (e.g., sigma-modification or e1-modification) to ensure the aforementioned stability result. A numerical example illustrates the main result.

Posted February 15, 2004

Last modified March 1, 2004

Control and Optimization Seminar Questions or comments?

2:30 pm Lockett Hall, Room 240
Frederic Mazenc, Institut National de Recherche en Informatique et en Automatique, FRANCE

Stabilization of Nonlinear Systems with Delay in the Input

Abstract: We present three results on the problem of globally uniformly and locally exponentially stabilizing nonlinear systems with delay in the input through differentiable bounded feedbacks: 1) We solve the problem for chains of integrators of arbitrary length. No limitation on the size of the delay is imposed, and an exact knowledge of the delay is not required. 2) We solve the problem for an oscillator with an arbitrarily large delay in the input. A first solution follows from a general result on the global stabilization of null controllable linear systems with delay in the input by bounded control laws with a distributed term. Next, it is shown through a Lyapunov analysis that the stabilization can be achieved as well when the distributed terms are neglected. It turns out that this main result is intimately related to the output feedback stabilization problem. 3) We solve the problem for a family of nonlinear feedforward systems when there is a delay in the input. Again, no limitation on the size of the delay is imposed, and an exact knowledge of the delay is not required.

This visit is supported by the Visiting Experts Program in Mathematics, Louisiana Board of Regents Grant LEQSF(2002-04)-ENH-TR-13.

Posted August 27, 2004

Control and Optimization Seminar Questions or comments?

3:00 pm 381 Lockett Hall
Stanislav Zabic, Louisiana State University Department of Mathematics
Graduate Student

Impulsive Systems

Posted September 3, 2004

Last modified September 10, 2004

Control and Optimization Seminar Questions or comments?

3:00 pm 381 Lockett Hall (Originally scheduled for Wednesday, September 8, 2004, 3:00 pm)

Stanislav Zabic, Louisiana State University Department of Mathematics
Graduate Student

Impulsive Systems, Part II

Posted September 20, 2004

Control and Optimization Seminar Questions or comments?

3:00 pm 381 Lockett Hall
Norma Ortiz, Mathematics Department, LSU
PhD Student

An Existence Theorem for the Neutral Problem of Bolza

Posted September 21, 2004

Control and Optimization Seminar Questions or comments?

3:10 pm – 4:00 pm Lockett 381
Norma Ortiz, Mathematics Department, LSU
PhD Student

An existence theorem for the neutral problem of Bolza, Part II

Posted September 29, 2004

Last modified October 1, 2004

Control and Optimization Seminar Questions or comments?

3:10 pm – 4:00 pm 381 Lockett Hall
Vinicio Rios, LSU Department of Mathematics
PhD Student

Strong Invariance for Dissipative Lipschitz Dynamics

Posted October 6, 2004

Control and Optimization Seminar Questions or comments?

3:10 pm – 4:00 pm 381 Lockett Hall
Vinicio Rios, LSU Department of Mathematics
PhD Student

Strong Invariance for Dissipative Lipschitz Dynamics, Part II

Posted October 13, 2004

Last modified October 14, 2004

Control and Optimization Seminar Questions or comments?

3:10 pm – 4:00 pm 381 Lockett Hall
George Cazacu, LSU Department of Mathematics
PhD student

A characterization of stability for dynamical polysystems via Lyapunov functions

Posted October 20, 2004

Control and Optimization Seminar Questions or comments?

3:10 pm – 4:00 pm 381 Lockett Hall
George Cazacu, LSU Department of Mathematics
PhD student

Closed relations and Lyapunov functions for polysystems

Posted November 3, 2004

Last modified November 17, 2004

Control and Optimization Seminar Questions or comments?

3:10 pm – 4:00 pm 381 Lockett Hall
Peter Wolenski, LSU Department of Mathematics
Russell B. Long Professor

Introduction to control Lyapunov functions and feedback

Posted November 25, 2004

Control and Optimization Seminar Questions or comments?

3:10 pm – 4:00 pm 381 Lockett Hall
Peter Wolenski, LSU Department of Mathematics
Russell B. Long Professor

Introduction to control Lyapunov functions and feedback, Part II

Posted February 21, 2005

Last modified February 25, 2005

Control and Optimization Seminar Questions or comments?

3:30 pm 2150 CEBA
Michael Malisoff, LSU
Roy P. Daniels Professor

An Introduction to Input-to-State Stability

Posted March 8, 2005

Last modified March 9, 2005

Control and Optimization Seminar Questions or comments?

3:30 pm – 4:30 pm 2150 CEBA
Rafal Goebel, University of California, Santa Barbara

Hybrid dynamical systems: solution concepts, graphical convergence, and robust stability

Hybrid dynamical systems, that is, systems in which some variables evolve continuously while other variables may jump, are an active area of research in control engineering. Basic examples of such systems include a bouncing ball (where the velocity "jumps" every time the ball hits the ground) and a room with a thermostat (where the temperature changes continuously while the heater is either "on" or "off"); much more elaborate cases are studied, for example, in robotics and automobile design.

The talk will present some challenges encountered on the way to a successful stability theory of hybrid systems, and propose a way to overcome them. In particular, we will motivate the use of generalized time domains, show how the nonclassical notion of graphical convergence appears to be the correct concept for treating sequences of solutions to hybrid systems, and explain how various other tools of set-valued and nonsmooth analysis can, and sometimes must, be used.

Posted March 29, 2005

Last modified March 2, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm – 3:00 pm Lockett 381
Vladimir Gaitsgory, School of Mathematics and Statistics, University of South Australia

TBA

Posted March 29, 2005

Last modified March 30, 2005

Control and Optimization Seminar Questions or comments?

3:30 pm – 4:30 pm CEBA 2150
Vladimir Gaitsgory, School of Mathematics and Statistics, University of South Australia

Limits of Occupational Measures and Averaging of Singularly Perturbed

Posted April 11, 2005

Control and Optimization Seminar Questions or comments?

3:40 pm – 4:40 pm Lockett 381
Jesus Pascal, Universidad del Zulia, Venezuela

On the Hamilton Jacobi Bellman Equation for a Deterministic Optimal Control Problem

Posted April 15, 2005

Control and Optimization Seminar Questions or comments?

3:30 pm – 4:30 pm CEBA 2150
Steven Hall, Louisiana State University, Department of Biological and Agricultural Engineering

Challenges in Measurement and Control with Biological Systems

Posted June 17, 2005

Control and Optimization Seminar Questions or comments?

10:30 am EE117
Li Qiu, Hong Kong University of Science and Technology

Perturbation Analysis beyond Singular Values -- A Metric Geometry on the Grassmann Manifold

Posted July 15, 2005

Control and Optimization Seminar Questions or comments?

10:00 am EE 117
Boumediene Hamzi, University of California, Davis

The Controlled Center Dynamics

Posted January 30, 2006

Last modified February 16, 2006

Control and Optimization Seminar Questions or comments?

10:00 am EE 117
Patrick De Leenheer, Department of Mathematics, University of Florida

Bistability and Oscillations in the Feedback-Controlled Chemostat

The chemostat is a biological reactor used to study the dynamics of species competing for nutrients. If there are n>1 competitors and a single nutrient, then at most one species survives, provided the control variables of the reactor are constant. This result is known as the competitive exclusion principle. I will review what happens if one of the control variables--the dilution rate--is treated as a feedback variable. Several species can coexist for appropriate choices of the feedback. Also, the dynamical behavior can be more complicated, exhibiting oscillations or bistability.
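The competitive exclusion principle mentioned above is easy to observe numerically. The following sketch is a toy two-species chemostat with Monod uptake and unit yields under a constant dilution rate; all parameter values are illustrative assumptions, not figures from the talk. The species with the lower break-even nutrient concentration $\lambda_i = a_i D/(m_i - D)$ survives and the other washes out.

```python
def simulate(T=200.0, dt=0.001):
    """Two-species chemostat with Monod uptake and constant dilution (unit yields).

    S'  = D*(S_in - S) - mu1(S)*x1 - mu2(S)*x2
    xi' = (mui(S) - D)*xi,   mui(S) = m_i*S/(a_i + S)
    Break-even concentrations: lambda_i = a_i*D/(m_i - D).
    """
    D, S_in = 0.5, 10.0        # dilution rate and input nutrient (illustrative)
    m1, a1 = 2.0, 1.0          # species 1: lambda_1 = 1/3 (the winner)
    m2, a2 = 1.5, 2.0          # species 2: lambda_2 = 1.0 (washed out)
    S, x1, x2 = 5.0, 0.1, 0.1
    for _ in range(int(T / dt)):   # forward-Euler integration
        mu1 = m1 * S / (a1 + S)
        mu2 = m2 * S / (a2 + S)
        S += dt * (D * (S_in - S) - mu1 * x1 - mu2 * x2)
        x1 += dt * (mu1 - D) * x1
        x2 += dt * (mu2 - D) * x2
    return S, x1, x2

S, x1, x2 = simulate()
# Species 1 settles near x1 = S_in - lambda_1 while species 2 goes extinct.
```

In the long run the nutrient settles at the winner's break-even value $\lambda_1$, matching the constant-control case the talk contrasts with feedback-controlled dilution.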

Posted April 19, 2006

Last modified February 2, 2022

Control and Optimization Seminar Questions or comments?

3:40 pm 381 Lockett
Franco Rampazzo, Dipartimento di Matematica Pura ed Applicata, Università degli Studi di Padova

Moving Constraints as Controls in Classical Mechanics

In most applications of control theory to mechanics the *control* is identified with a force, or with a torque. However, in some concrete situations, the forces are in fact unknown, whereas what one is actually controlling is the *position of part of the system*. More precisely, if the state space consists of the product $\mathcal{Q} \times \mathcal{C}$ of two manifolds $\mathcal{Q}$ and $\mathcal{C}$, one can regard $\mathcal{Q}$ as the actual (reduced) state space by identifying $\mathcal{C}$ with a set of controls. As an example, one can think of a mathematical pendulum whose pivot is constrained on a vertical line. In this case $\mathcal{Q} = S^1$ and $\mathcal{C} = \mathbf{R}$. (The title of the talk refers to the fact that a control function $\mathbf{c}(\cdot)$ defined on a time-interval $I$ can be considered as a time-dependent, i.e., *moving*, state constraint acting on the original state space $\mathcal{Q} \times \mathcal{C}$.)

To begin with, we will illustrate some remarkable geometric aspects, which
involve, in particular, the metric induced by the kinetic energy on the manifold
$\mathcal{Q} \times \mathcal{C}$ and its relation with the foliation $\{\mathcal{Q} \times \{\mathbf{c}\} \,|\, \mathbf{c} \in \mathcal{C} \}$ .

Secondly, we will address the question of the closure of the set of solutions
for unbounded control systems, and we will see how this issue is connected with
our mechanical problems.

Finally, we will show how some well-known mechanical questions—including
the vibrational stabilization of the so-called *inverted pendulum*—can actually be regarded as instances of problems involving moving constraints as controls.

Professor Rampazzo's visit is sponsored by the Louisiana Board of Regents Grant
"Enhancing Control Theory at LSU".

Posted September 18, 2006

Last modified September 17, 2021

Control and Optimization Seminar Questions or comments?

3:30 pm 285 Lockett
Martin Hjortso, Louisiana State University
Chevron Professor of ChemE

Some Problems in Population Balance Modeling

Posted January 30, 2007

Last modified January 31, 2007

Control and Optimization Seminar Questions or comments?

11:30 am – 12:30 pm Lockett 301D (Conference Room)
Michael Malisoff, LSU
Roy P. Daniels Professor

On the Stability of Periodic Solutions in the Perturbed Chemostat

We study the chemostat model for one species competing for one nutrient using a Lyapunov-type analysis. We design the dilution rate function so that all solutions of the chemostat converge to a prescribed periodic solution. In terms of chemostat biology, this means that no matter what positive initial levels for the species concentration and nutrient are selected, the long-term species concentration and substrate levels closely approximate a prescribed oscillatory behavior. This is significant because it reproduces the realistic ecological situation where the species and substrate concentrations oscillate. We show that the stability is maintained when the model is augmented by additional species that are being driven to extinction. We also give an input-to-state stability result for the chemostat-tracking equations for cases where there are small perturbations acting on the dilution rate and initial concentration. This means that the long-term species concentration and substrate behavior enjoys a highly desirable robustness property, since it continues to approximate the prescribed oscillation up to a small error when there are small unexpected changes in the dilution rate function. This talk is based on the speaker's joint work with Frederic Mazenc and Patrick De Leenheer.

Posted February 4, 2007

Control and Optimization Seminar Questions or comments?

11:30 am – 12:30 pm 239 Lockett
Michael Malisoff, LSU
Roy P. Daniels Professor

Further Results on Lyapunov Functions for Slowly Time-Varying Systems

We provide general methods for explicitly constructing strict Lyapunov functions for fully nonlinear slowly time-varying systems. Our results apply to cases where the given dynamics and corresponding frozen dynamics are not necessarily exponentially stable. This complements our previous Lyapunov function constructions for rapidly time-varying dynamics. We also explicitly construct input-to-state stable Lyapunov functions for slowly time-varying control systems. We illustrate our findings by constructing explicit Lyapunov functions for a pendulum model, an example from identification theory, and a perturbed friction model. This talk is based on the speaker's joint work with Frederic Mazenc.

Posted February 9, 2007

Control and Optimization Seminar Questions or comments?

11:40 am – 12:30 pm Lockett 239
Jimmie Lawson, Mathematics Department, LSU

The Symplectic Group and Semigroup and Riccati Differential Equations

Abstract: We develop close connections between the important control-theoretic matrix Riccati differential equation and the symplectic matrix group and its symplectic subsemigroup. We use this example as a case study to demonstrate how the Lie theory of subsemigroups of a matrix group can be applied to problems in geometric control theory. As an application, we derive from this viewpoint the existence of a solution of the Riccati equation for all $t \geq 0$ under quite general hypotheses.
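For background, the classical link between the Riccati equation and the symplectic linear flow can be summarized as follows; this is the textbook linear-quadratic setting, offered as an illustration rather than as the speaker's exact formulation.

```latex
% Differential Riccati equation of linear-quadratic optimal control:
\[
\dot P(t) + A^{\top}P(t) + P(t)A - P(t)BR^{-1}B^{\top}P(t) + Q = 0 .
\]
% It linearizes through the Hamiltonian (infinitesimally symplectic) matrix:
\[
\frac{d}{dt}\begin{pmatrix} X \\ Y \end{pmatrix}
= \begin{pmatrix} A & -BR^{-1}B^{\top} \\ -Q & -A^{\top} \end{pmatrix}
\begin{pmatrix} X \\ Y \end{pmatrix},
\qquad X(t_0) = I, \quad Y(t_0) = P(t_0).
\]
% As long as X(t) is invertible, P(t) = Y(t)X(t)^{-1} solves the Riccati
% equation, so global existence reduces to invertibility of X(t) along
% the symplectic flow.
```

Invariance of the symplectic subsemigroup under the flow is what delivers the invertibility, and hence the existence for all $t \geq 0$, asserted in the abstract.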

Posted February 22, 2007

Control and Optimization Seminar Questions or comments?

11:40 am – 12:30 pm Lockett 239
Jimmie Lawson, Mathematics Department, LSU

The Symplectic Group and Semigroup and Riccati Differential Equations (Part II)

Abstract: We develop close connections between the important control-theoretic matrix Riccati differential equation and the symplectic matrix group and its symplectic subsemigroup. We use this example as a case study to demonstrate how the Lie theory of subsemigroups of a matrix group can be applied to problems in geometric control theory. As an application, we derive from this viewpoint the existence of a solution of the Riccati equation for all $t \geq 0$ under quite general hypotheses.

Posted March 5, 2007

Control and Optimization Seminar Questions or comments?

11:40 am – 12:30 pm Lockett 239
Jimmie Lawson, Mathematics Department, LSU

The Symplectic Group and Semigroup and Riccati Differential Equations (Part III)

Abstract: We develop close connections between the important control-theoretic matrix Riccati differential equation and the symplectic matrix group and its symplectic subsemigroup. We use this example as a case study to demonstrate how the Lie theory of subsemigroups of a matrix group can be applied to problems in geometric control theory. As an application, we derive from this viewpoint the existence of a solution of the Riccati equation for all $t \geq 0$ under quite general hypotheses.

Posted March 26, 2007

Control and Optimization Seminar Questions or comments?

11:40 am – 12:30 pm 239 Lockett
Feng Gao, LSU Department of Mechanical Engineering

A Generalized Approach for the Control of MEM Relays

Abstract: We show that voltage-controlled, electrostatic and electromagnetic micro-relays have a common dynamic structure. As a result, both types of microelectromechanical (MEM) relays are subject to the nonlinear phenomenon known as pull-in, which is usually associated with the electrostatic case. We show that open-loop control of MEM relays naturally leads to pull-in during the relay closing. Two control schemes - a Lyapunov design and a feedback linearization design - are presented with the objectives of avoiding pull-in during the micro-relay closing and improving the transient response during the micro-relay opening. Simulations illustrate the performance of the two control schemes in comparison to the typical open-loop operation of the MEM relay.

Posted April 16, 2007

Control and Optimization Seminar Questions or comments?

11:40 am – 12:30 pm Room 239 Lockett
Peter Wolenski, LSU Department of Mathematics
Russell B. Long Professor

The role of convexity in optimization and control theory.

Abstract: This talk will broadly survey the role of convexity in optimization theory, and outline its special place in optimal control. Roughly speaking, convexity plays the role in optimization analogous to that enjoyed by linearity in dynamical system theory. We shall illustrate this by discussing the features of local vs. global statements, generalized differentiation, duality, and representation formulas.

Posted April 23, 2007

Control and Optimization Seminar Questions or comments?

11:40 am – 12:30 pm Room 239 Lockett
Peter Wolenski, LSU Department of Mathematics
Russell B. Long Professor

The role of convexity in optimization and control theory (Part II)

Abstract: This talk will broadly survey the role of convexity in optimization theory, and outline its special place in optimal control. Roughly speaking, convexity plays the role in optimization analogous to that enjoyed by linearity in dynamical system theory. We shall illustrate this by discussing the features of local vs. global statements, generalized differentiation, duality, and representation formulas.

Posted May 1, 2007

Control and Optimization Seminar Questions or comments?

11:40 am – 12:30 pm Room 239 Lockett
Peter Wolenski, LSU Department of Mathematics
Russell B. Long Professor

The role of convexity in optimization and control theory (Part III)

Abstract: This talk will broadly survey the role of convexity in optimization theory, and outline its special place in optimal control. Roughly speaking, convexity plays the role in optimization analogous to that enjoyed by linearity in dynamical system theory. We shall illustrate this by discussing the features of local vs. global statements, generalized differentiation, duality, and representation formulas.

Posted September 9, 2007

Control and Optimization Seminar Questions or comments?

2:30 pm – 3:30 pm Prescott 205
Alvaro Guevara, Dept of Mathematics, LSU

Student Seminar on Control Theory and Optimization

Introduction to Convex Analysis II

Posted June 28, 2009

Last modified May 8, 2021

Control and Optimization Seminar Questions or comments?

10:00 am Lockett 301D (Conference Room)
Michael Malisoff, LSU
Roy P. Daniels Professor

Strict Lyapunov Function Constructions under LaSalle Conditions with an Application to Lotka-Volterra Systems

This informal seminar is by special request of Guillermo Ferreyra and is open to all faculty and graduate students. See abstract and related papers and slides.

Posted October 9, 2009

Last modified May 8, 2021

Control and Optimization Seminar Questions or comments?

10:00 am 117 Electrical Engineering Building
Michael Malisoff, LSU
Roy P. Daniels Professor

Constructions of Strict Lyapunov Functions: An Overview

Information on ECE Seminar Web Site.

Posted January 28, 2010

Last modified April 22, 2010

Control and Optimization Seminar Questions or comments?

3:00 pm 117 Electrical Engineering
Michael Malisoff, LSU
Roy P. Daniels Professor

New Lyapunov Function Methods for Adaptive and Time-Delayed Systems

Lyapunov functions are an important tool in nonlinear control systems theory. This talk presents new Lyapunov-based adaptive tracking control results for nonlinear systems in feedback form with multiple inputs and unknown high-frequency control gains. Our adaptive controllers yield uniform global asymptotic stability for the error dynamics, which implies parameter estimation and tracking for the original systems. We demonstrate our work using a tracking problem for a brushless DC motor turning a mechanical load. Then we present a new class of dilution rate feedback controllers for two-species chemostat models with Haldane uptake functions where the species concentrations are measured with an unknown time delay. This work is joint with Marcio de Queiroz and Frederic Mazenc.

Posted September 21, 2015

Last modified September 22, 2015

Control and Optimization Seminar Questions or comments?

12:30 pm – 1:30 pm 381 Lockett Hall
Michael Malisoff, LSU
Roy P. Daniels Professor

Control of Neuromuscular Electrical Stimulation: A Case Study of Predictor Control under State Constraints

We present a new tracking controller for neuromuscular electrical stimulation, which is an emerging technology that artificially stimulates skeletal muscles to help restore functionality to human limbs. The novelty of our work is that we prove that the tracking error globally asymptotically and locally exponentially converges to zero for any positive input delay, coupled with our ability to satisfy a state constraint imposed by the physical system. Also, our controller only requires sampled measurements of the states instead of continuous measurements and allows perturbed sampling schedules, which can be important for practical purposes. Our work is based on a new method for constructing predictor maps for a large class of time-varying systems, which is of independent interest. See http://dx.doi.org/10.1002/rnc.3211.

Posted September 28, 2015

Control and Optimization Seminar Questions or comments?

12:30 pm – 1:30 pm Room 284 Lockett Hall
Cristopher Hermosilla, Universidad Técnica Federico Santa María

On the Construction of Continuous Suboptimal Feedback Laws

An important issue in optimal control is that optimal feedback laws (the minimizers) are usually discontinuous functions of the state, which leads to ill-posed closed-loop systems and robustness problems. In this talk we show a procedure for the construction of a continuous suboptimal feedback law that overcomes the aforesaid problems. The construction we exhibit depends exclusively on initial data that could be obtained from the optimal feedback. This is joint work with Fabio Ancona (Universita degli Studi di Padova, Italy).

Posted October 5, 2015

Last modified October 8, 2015

Control and Optimization Seminar Questions or comments?

12:30 pm – 1:30 pm Room 284 Lockett Hall
Hugo Leiva, Visiting Professor, Louisiana State University

Semilinear Control Systems with Impulses, Delays and Nonlocal Conditions.

Mathematical control theory is the area of applied mathematics dealing with the analysis and synthesis of control systems. To control a system means to influence its behavior so as to achieve a desired goal such as stability, tracking, disturbance rejection, or optimality with respect to some performance criterion. For many control systems in real life, impulses and delays are intrinsic phenomena that do not modify their controllability. So we conjecture that, under certain conditions, perturbations of the system caused by abrupt changes and delays do not affect certain properties such as controllability.

In this investigation we apply fixed point theorems to prove the controllability of semilinear systems of differential equations with impulses, delays, and nonlocal conditions. Specifically, under additional conditions we prove the following statement: if the linear system $z'(t) = A(t)z(t) + B(t)u(t)$ is controllable on $[0, \tau]$, then the semilinear system $z'(t) = A(t)z(t) + B(t)u(t) + f(t,z(t),u(t))$ with impulses, delays, and nonlocal conditions is also controllable on $[0, \tau]$. Moreover, we can exhibit a control steering the semilinear system from an initial state $z_0$ to a final state $z_1$ at time $\tau > 0$.

This is a recent research work with many questions and open problems.

Posted March 18, 2019

Control and Optimization Seminar Questions or comments?

10:30 am – 11:30 am 3316E Patrick F. Taylor Hall

Trying to Keep it Real: 25 Years of Trying to Get the Stuff I Learned in Grad School to Work on Mechatronic Systems

See https://www.lsu.edu/eng/ece/seminar/

Posted May 2, 2019

Last modified May 3, 2019

Control and Optimization Seminar Questions or comments?

3:00 pm – 4:00 pm 1263 Patrick F. Taylor Hall
Laurent Burlion, Rutgers University

Advanced Nonlinear Control Methods to Push Aerospace Systems to Their Limits

Abstract: Although often neglected in the design of flight control laws, nonlinearities must be taken into account either to get the best performance or to enlarge the flight envelope of controlled aerospace systems. Indeed, every system has a limited control authority and is subject to some safety constraints which impose limits on certain variables. In this talk, we will first present an overview of our recent applications of nonlinear control design methods to aerospace systems. Then, we will illustrate advanced nonlinear control techniques, including bounded backstepping, anti-windup and extended command governors, that were developed to execute an aircraft vision based landing on an unknown runway. Finally, we will discuss some ongoing research activities being conducted to provide drones with new capabilities, leading to a dramatic improvement in safety.

Posted November 21, 2019

Last modified November 24, 2019

Control and Optimization Seminar Questions or comments?

10:00 am 3316E Patrick F. Taylor Hall
Pavithra Prabhakar, Kansas State University

Robust Verification of Hybrid Systems

Information on ECE Seminar Web Site.

Posted March 4, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to get password)
Michael Malisoff, LSU
Roy P. Daniels Professor

Delay Compensation in Control Systems

Control systems are a class of dynamical systems that contain forcing terms. When control systems are used in biological or engineering applications, the forcing terms are often used to represent different possible forces that can be applied to the systems. Then the feedback control problem consists of finding formulas for the forcing terms, which are functions that can depend on the state of the systems, and which ensure a prescribed qualitative behavior of the dynamical systems, such as global asymptotic convergence towards an equilibrium point. Then the forcing terms are called feedback controls. However, many control systems in biology or engineering are subject to input delays, which preclude the possibility of using current values of the states of the control systems in the expressions for the feedback controls. One approach to solving feedback control problems under input delays involves solving the problems with the delays set equal to zero, and then computing upper bounds on the input delays that the systems can tolerate while still realizing the desired objective. For longer delays, the reduction model approach is often used but can lead to implementation challenges because it leads to distributed terms in the controls. A third approach to delay compensation involves sequential predictors, which can compensate for arbitrarily long input delays using stacks of differential equations instead of distributed terms. This talk reviews recent developments in this area, and is based in part on the speaker's collaborations with Miroslav Krstic, Frederic Mazenc, Fumin Zhang, and students. The talk will be understandable to those familiar with the basic theory of ordinary differential equations. No prerequisite background in systems and control will be needed to understand and appreciate this talk.
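The reduction model approach mentioned above can be illustrated on a scalar toy system; this is a minimal sketch with assumed gains and delay, not an example from the talk. The control uses a prediction of the state one delay-interval ahead, built from the current state and a distributed (integral) term over the recent input history, so that the closed loop behaves as if there were no delay.

```python
import math

def simulate(a=1.0, b=1.0, K=2.0, tau=0.5, dt=0.002, T=10.0):
    """Predictor feedback for the input-delay system x'(t) = a*x(t) + b*u(t - tau).

    The reduction-model control u(t) = -K*P(t) uses the tau-ahead prediction
        P(t) = exp(a*tau)*x(t) + integral_{t-tau}^{t} exp(a*(t-s))*b*u(s) ds,
    which satisfies P(t) = x(t + tau), so the closed loop obeys
    P' = (a - b*K)*P exactly (here a - b*K = -1 < 0).
    """
    n_delay = int(tau / dt)
    # Quadrature weights exp(a*(t-s))*b*dt for s = t - (j+1)*dt, j = 0..n_delay-1.
    weights = [math.exp(a * (j + 1) * dt) * b * dt for j in range(n_delay)]
    u_hist = [0.0] * n_delay          # u(t - tau), ..., u(t - dt); zero initial input
    x = 1.0
    for _ in range(int(T / dt)):
        # Distributed term, summed from the most recent input backwards.
        integral = sum(w * u for w, u in zip(weights, reversed(u_hist)))
        P = math.exp(a * tau) * x + integral      # prediction of x(t + tau)
        u = -K * P
        x += dt * (a * x + b * u_hist[0])         # plant sees the delayed input
        u_hist = u_hist[1:] + [u]
    return x

x_final = simulate()
# x decays to 0 despite the open-loop instability (a > 0) and the delay.
```

The integral term here is exactly the distributed term whose implementation burden motivates the sequential-predictor alternative discussed in the talk.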

Posted March 10, 2021

Last modified March 11, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to get password)
Peter Wolenski, LSU Department of Mathematics
Russell B. Long Professor

Introduction to Convex Analysis via the Elvis Problem

The Elvis problem was introduced into the undergraduate mathematical literature by Timothy Pennings, whose dog (named Elvis) enjoyed fetching an object thrown from the shore of Lake Michigan into the water. Elvis was observed to retrieve the object along a path resembling how light would refract in isotropic media according to Snell's Law. We retain the problem's "Elvis" nomenclature but greatly generalize the problem by considering anisotropic media, and we use the tools of Convex Analysis to provide a complete description of optimal movement. The velocity sets are closed, bounded convex sets containing the origin in their interiors, whereas the original problem used only centered balls. Further generalizations are considered, with faster movement allowed on the interface and with more than two media.
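As a concrete instance of the original isotropic two-medium problem, the sketch below numerically minimizes the fetch time and checks the Snell-type optimality condition; the geometry and the running/swimming speeds are illustrative assumptions, not data from the talk.

```python
import math

def fetch_time(x, d=10.0, w=5.0, r=6.0, s=1.0):
    """Time to run x meters along the shore at speed r, then swim
    straight to the ball at (d, -w) at speed s."""
    return x / r + math.hypot(d - x, w) / s

def golden_min(f, lo, hi, tol=1e-10):
    """Golden-section search for the minimizer of a unimodal f on [lo, hi]."""
    g = (math.sqrt(5) - 1) / 2
    while hi - lo > tol:
        c, d = hi - g * (hi - lo), lo + g * (hi - lo)
        if f(c) < f(d):
            hi = d
        else:
            lo = c
    return (lo + hi) / 2

x_star = golden_min(fetch_time, 0.0, 10.0)
# Snell-type optimality: the sine of the water-entry angle (measured from
# the normal to the shoreline) equals the speed ratio s/r.
sin_theta = (10.0 - x_star) / math.hypot(10.0 - x_star, 5.0)
print(x_star, sin_theta, 1.0 / 6.0)
```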

Posted March 2, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to get password)
Romain Postoyan, CNRS Researcher

A Short Introduction to Event-Triggered Control

Control systems are increasingly implemented on digital platforms, which typically have limited power, computing, and communication resources. In this context, the classical implementation of sampled-data control systems may not be suitable. Indeed, while the periodic transmission of data simplifies the analysis (in general) and the implementation of the control law, the induced use of the platform resources may be too demanding. An alternative consists of defining the transmission instants between the plant and the controller based on the actual system needs, and not on the elapsed time since the last transmission. This alternative is the basis of event-triggered control. With this paradigm, a transmission occurs whenever a state/output-dependent criterion is violated. The key question is then how to define this triggering rule to ensure the desired control objectives, while guaranteeing the existence of a strictly positive minimum time between any two communications, which is essential in practice. In this presentation, we review basic techniques of the field, with particular attention to nonlinear systems, and compare them on examples. We also explain the benefit of introducing auxiliary variables to define the transmission criterion, in which case we speak of dynamic event-triggered control. Finally, we conclude with some open problems.
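A minimal sketch of the triggering-rule idea, using a Tabuada-style relative-error rule on a scalar integrator; the system, the threshold sigma, and all numbers are my own illustrative choices, not from the talk.

```python
# Event-triggered stabilization of dx/dt = u with u = -x(t_k): the control
# is updated only when the sampling error e = x(t_k) - x(t) violates the
# relative-error rule |e| <= sigma * |x(t)|.
sigma, dt, T = 0.5, 1e-3, 10.0
x, xk, events = 1.0, 1.0, 0
for _ in range(int(T / dt)):
    if abs(xk - x) > sigma * abs(x):  # triggering criterion violated
        xk = x                        # sample and transmit the current state
        events += 1
    x += dt * (-xk)                   # plant evolves under the held control
print(f"final x = {x:.2e} after only {events} control updates")
```

Because the held control always satisfies |x(t_k)| <= (1 + sigma)|x(t)|, the state still decays exponentially, while transmissions occur only a few dozen times instead of at every sampling instant.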

Posted March 10, 2021

Last modified March 26, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Summer Atkins, University of Florida

Solving Singular Control Problems in Mathematical Biology Using PASA

We will demonstrate how to use a nonlinear polyhedral-constrained optimization solver called the Polyhedral Active Set Algorithm (PASA) to solve a general optimal control problem that is linear in the control. In numerically solving such a problem, oscillatory numerical artifacts can occur if the optimal control possesses a singular subarc. We consider adding a total variation regularization term to the objective functional of the problem to suppress these oscillatory artifacts. We then demonstrate PASA's performance on three singular control problems arising in different applications in mathematical biology.
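The total variation term can be written down directly for a discretized control. The sketch below is not PASA itself (whose internals we do not reproduce); it only illustrates why the penalty discriminates against the chattering that solvers can produce along a singular subarc. The objective J and the weight eps are placeholders.

```python
import numpy as np

def tv(u):
    """Total variation of a discretized control: sum of |u[i+1] - u[i]|."""
    return float(np.abs(np.diff(u)).sum())

def penalized_objective(J, u, eps):
    """Original objective plus the TV regularization term eps * TV(u)."""
    return J(u) + eps * tv(u)

# A chattering control and a smooth one with the same mean value:
# the TV penalty strongly prefers the smooth candidate.
u_chatter = np.array([0.0, 1.0] * 50)
u_smooth = np.full(100, 0.5)
print(tv(u_chatter), tv(u_smooth))
```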

Posted February 20, 2021

Last modified March 26, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Warren Dixon, University of Florida Department of MAE
Fellow of ASME and IEEE

Assured Autonomy: Uncertainty, Optimality, and Data Intermittency

Autonomous systems can provide advantages such as access, expendability, and scaled force projection in adversarial environments. However, such environments are inherently complex in the sense that they are uncertain and data exchanges for sensing and communication can be compromised or denied. This presentation provides a deep dive into some feedback control perspectives related to uncertainty, optimality, and data intermittency that provide foundations for assured autonomous operations. New results will be described for guaranteed deep learning methods that can be employed in real time with no data. These efforts include methods for (deep) reinforcement learning based approaches to yield approximate optimal policies in the presence of uncertainty. The presentation will conclude with examples of intermittent feedback that explore the data exchange limits for guaranteed operation, including purposeful intermittency to enable new capabilities. Specific examples include intermittency due to occlusions in image-based feedback and intermittency resulting from various network control problems.

Posted March 22, 2021

Last modified March 26, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Jean-Michel Coron, Universite Pierre et Marie Curie, France
Member of Institut Universitaire de France

Boundary Stabilization of 1-D Hyperbolic Systems

Hyperbolic systems in one space dimension appear in various real life applications, such as navigable rivers and irrigation channels, heat exchangers, plug flow chemical reactors, gas pipe lines, chromatography, and traffic flow. This talk will focus on the stabilization of these systems by means of boundary controls. Stabilizing feedback laws will be constructed. This includes explicit feedback laws which have been implemented for the regulation of the rivers La Sambre and La Meuse. The talk will also deal with the more complicated case where there are source terms.

Posted March 5, 2021

Last modified January 10, 2022

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Vincent Andrieu, CNRS

An Overview of Asymptotic Observer Design Methods

Dynamic observers are estimation algorithms that reconstruct missing data from a model of a dynamical system together with information obtained from the measurements. In this presentation, we present the main methods allowing the synthesis of an asymptotic observer. Starting from necessary conditions inspired by Luenberger's work, we show the importance of contraction properties. Then we survey the different existing methods. Finally, we give an overview of open issues in the field.
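As the most classical member of the family surveyed in the talk, a Luenberger observer for a linear system can be sketched as follows; the system matrices and the gain L are ad hoc choices that make A - LC Hurwitz, not examples from the presentation.

```python
import numpy as np

# Luenberger observer for x' = A x with measurement y = C x:
# the estimate obeys xh' = A xh + L (y - C xh), so the estimation error
# e = x - xh satisfies e' = (A - L C) e and decays when A - L C is Hurwitz.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [1.0]])               # gain making A - L C Hurwitz

dt, steps = 1e-3, 20_000
x = np.array([1.0, -1.0])                  # true (unknown) initial state
xh = np.zeros(2)                           # observer starts with no information
for _ in range(steps):
    y = C @ x                              # only y, not x, is measured
    x = x + dt * (A @ x)
    xh = xh + dt * (A @ xh + (L @ (y - C @ xh)).ravel())
print(np.linalg.norm(x - xh))
```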

Posted March 18, 2021

Last modified January 10, 2022

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Lars Gruene, University of Bayreuth, Germany

On Turnpike Properties and Sensitivities and their Use in Model Predictive Control

Model predictive control (MPC) is one of the most popular modern control techniques. It generates a feedback-like control input from the iterated solution of open-loop optimal control problems. In recent years, there has been significant progress in answering the question of when MPC yields approximately optimal solutions. In this talk we will highlight the role of the turnpike property for this analysis. Moreover, we will show that for PDE-governed control problems the turnpike property can be seen as a particular instance of a more general sensitivity property. This can be used to obtain efficient discretization schemes for the numerical solution of the optimal control problems in the MPC algorithm.
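The receding-horizon mechanism behind MPC can be sketched on a scalar linear-quadratic example, where each open-loop problem is solved exactly by a backward Riccati recursion; the system and cost parameters are illustrative assumptions, not from the talk.

```python
# MPC for the unstable scalar system x+ = a x + b u with stage cost
# q x^2 + r u^2: at each step, solve the N-step open-loop problem by a
# backward Riccati sweep, apply only its first input, and repeat.
a, b, q, r, N = 1.2, 1.0, 1.0, 1.0, 10

def first_lq_input(x):
    p = q                                  # terminal weight (ad hoc choice)
    for _ in range(N):                     # backward Riccati recursion
        k = b * p * a / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return -k * x                          # first input of the optimal sequence

x, traj = 5.0, []
for _ in range(40):                        # closed receding-horizon loop
    x = a * x + b * first_lq_input(x)
    traj.append(x)
print(traj[-1])
```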

Posted April 1, 2021

Last modified April 26, 2021

Control and Optimization Seminar Questions or comments?

10:00 am https://lsu.zoom.us/j/94269991036 (Contact Prof. Malisoff to request password)
Hiroshi Ito, Kyushu Institute of Technology

Constructions of Lyapunov Functions for Input-to-State Stability and Control of SIR Model

To predict the spread of infectious diseases, mathematical models have been playing an essential role. The most popular model, called the SIR model, describes the behavior of the relationship between populations of susceptible, infected, and recovered individuals. The model exhibits a bifurcation resulting in the emergence of the endemic equilibrium when the disease transmission rate is large, or the net flow of susceptible individuals entering the region is large. In many cases, societies cannot make the inflow small enough to directly eradicate a disease with a high transmission rate. Investigating and confirming stability and robustness properties of both disease-free and endemic equilibria are important and useful for the prediction and control of infectious diseases. This presentation first provides a brief introduction to the stability analysis, and then the limitations of standard tools and results in mathematical epidemiology are explained from the standpoint of a control theorist. The presentation focuses on the theory of construction and the use of Lyapunov functions for this specific nonlinear dynamical system. Major attention is paid to strictness of Lyapunov functions specialized to disease models. A new result allows one to establish robustness of the SIR model with respect to the inflow perturbation in terms of input-to-state stability. Applications demonstrated in this presentation include the design of feedback control laws for infectious diseases with mass vaccination.
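For concreteness, a minimal simulation of the SIR model with vital dynamics illustrates the endemic equilibrium that emerges when the basic reproduction number exceeds one; the parameter values are illustrative assumptions, not the talk's.

```python
# Minimal SIR model with vital dynamics (illustrative parameters):
#   S' = B - beta*S*I - mu*S
#   I' = beta*S*I - (gamma + mu)*I
#   R' = gamma*I - mu*R
# The endemic equilibrium S* = (gamma + mu)/beta emerges when the basic
# reproduction number R0 = beta*(B/mu)/(gamma + mu) exceeds one.
B, beta, gamma, mu = 0.02, 0.5, 0.1, 0.02
R0 = beta * (B / mu) / (gamma + mu)

S, I, R = B / mu - 1e-3, 1e-3, 0.0   # start near the disease-free state
dt = 0.01
for _ in range(500_000):             # forward-Euler integration to t = 5000
    dS = B - beta * S * I - mu * S
    dI = beta * S * I - (gamma + mu) * I
    dR = gamma * I - mu * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
print(R0, S, I)
```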

Posted March 18, 2021

Last modified April 17, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
George Avalos, University of Nebraska

Mathematical Analysis of Interactive Fluid and Multilayered Structure PDE Dynamics

We discuss our recent work on a certain multilayered structure-fluid interaction (FSI) which arises in the modeling of vascular blood flow. The coupled PDE system under consideration mathematically accounts for the fact that mammalian veins and arteries are typically composed of various layers of tissues. Each layer will generally manifest its own intrinsic material properties, and will be separated from the other layers by thin elastic laminae. Consequently, the resulting modeling FSI system will manifest an additional PDE, which evolves on the boundary interface, to account for the thin elastic layer. (This is in contrast to the FSI PDEs which appear in the literature, wherein elastic dynamics are largely absent on the boundary interface.) As such, the PDE system will constitute a coupling of 3D fluid-2D wave-3D elastic dynamics. For this multilayered FSI system, we will in particular present results on well-posedness and stability. This is joint work with Pelin Guven Geredeli and Boris Muha.

Posted March 15, 2021

Last modified May 17, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Francesco Bullo, University of California, Santa Barbara
IEEE, IFAC, and SIAM Fellow

Non-Euclidean Contraction Theory and Network Systems

In this talk we discuss recent work on contraction theory and its application to network systems. First, we introduce weak semi-inner products as an analysis tool for non-Euclidean norms and establish equivalent characterizations of contraction and incremental stability. We also review robustness and network stability in this new setting. Second, we discuss the notion of weakly and semi-contracting systems. For weakly contracting systems we prove a dichotomy for asymptotic behavior of their trajectories and show asymptotic stability for certain non-Euclidean norms. For semi-contracting systems we study convergence to invariant subspaces and applications to networks of diffusively-coupled oscillators. This is joint work with Pedro Cisneros-Velarde, Alexander Davydov, and Saber Jafarpour.
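One elementary ingredient of the non-Euclidean viewpoint is the matrix measure (logarithmic norm) induced by a norm. The sketch below contrasts the l-infinity measure with the Euclidean one on an illustrative matrix of my own choosing: the former certifies contraction while the latter does not.

```python
import numpy as np

# Matrix measures (logarithmic norms): mu(A) < 0 certifies that x' = A x
# contracts trajectories in the corresponding norm.
def mu_inf(A):
    """l-infinity measure: max_i ( a_ii + sum_{j != i} |a_ij| )."""
    d = np.diag(A)
    return float(np.max(d + np.abs(A).sum(axis=1) - np.abs(d)))

def mu_2(A):
    """Euclidean measure: largest eigenvalue of (A + A^T)/2."""
    return float(np.max(np.linalg.eigvalsh((A + A.T) / 2)))

A = np.array([[-1.0, 0.0], [9.0, -10.0]])  # illustrative example
print(mu_inf(A), mu_2(A))  # l-infinity measure is negative, Euclidean is not
```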

Posted May 19, 2021

Control and Optimization Seminar Questions or comments?

2:00 pm https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Francesco Bullo, University of California, Santa Barbara
IEEE, IFAC, and SIAM Fellow

Non-Euclidean Contraction Theory and Network Systems

This is a continuation of last week’s Control and Optimization Seminar by the same speaker.

Posted September 21, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Michael Malisoff, LSU
Roy P. Daniels Professor

Event-Triggered Control Using a Positive Systems Approach

Control systems are a class of dynamical systems that contain forcing terms. When control systems are used in engineering applications, the forcing terms can represent forces that can be applied to the systems. Then the feedback control problem consists of finding formulas for the forcing terms, which are functions that can depend on the state of the systems, and which ensure a prescribed qualitative behavior of the dynamical systems, such as global asymptotic convergence towards an equilibrium point. Then the forcing terms are called feedback controls. Traditional feedback control methods call for continuously changing the feedback control values, or changing their values at a sequence of times that are independent of the state of the control systems. This can lead to unnecessarily frequent changes in control values, which can be undesirable in engineering applications. This motivated the development of event-triggered control, whose objective is to find formulas for feedback controls whose values are only changed when it is essential to change them in order to achieve a prescribed system behavior. This talk summarizes the speaker's recent research on event-triggered control theory and applications in marine robotics, which is collaborative with Corina Barbalata, Zhong-Ping Jiang, and Frederic Mazenc. The talk will be understandable to those familiar with the basic theory of ordinary differential equations. No prerequisite background in systems and control will be needed to understand and appreciate this talk.

Posted September 28, 2021

Last modified October 26, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Magnus Egerstedt, University of California, Irvine
Stacey Nicholas Dean of Engineering, IEEE Fellow, IFAC Fellow

Constraint-Based Control Design for Long Duration Autonomy

When robots are to be deployed over long time scales, optimality should take a backseat to “survivability”, i.e., it is more important that the robots do not break or completely deplete their energy sources than that they perform certain tasks as effectively as possible. For example, in the context of multi-agent robotics, we have a fairly good understanding of how to design coordinated control strategies for making teams of mobile robots achieve geometric objectives, such as assembling shapes or covering areas. But what happens when these geometric objectives no longer matter all that much? In this talk, we consider this question of long duration autonomy for teams of robots that are deployed in an environment over a sustained period of time and that can be recruited to perform a number of different tasks in a distributed, safe, and provably correct manner. This development will involve the composition of multiple barrier certificates for encoding tasks and safety constraints through the development of non-smooth barrier functions, as well as a detour into ecology as a way of understanding how persistent environmental monitoring can be achieved by studying animals with low-energy lifestyles, such as the three-toed sloth.

Posted October 5, 2021

Last modified October 25, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)
Alberto Bressan, Penn State
Eberly Family Chair Professor

Optimal Control of Propagation Fronts and Moving Sets

We consider a controlled reaction-diffusion equation, modeling the spreading of an invasive population. Our goal is to derive a simpler model, describing the controlled evolution of a contaminated set. The first part of the talk will focus on the optimal control of 1-dimensional traveling wave profiles. Using Stokes' formula, explicit solutions are obtained, which in some cases require measure-valued optimal controls. In turn, this leads to a family of optimization problems for a moving set, related to the original parabolic problem via a sharp interface limit. In connection with moving sets, in the second part of the talk I will present some results on controllability, existence of optimal strategies, and necessary conditions. Examples of explicit solutions and several open questions will also be discussed. This is joint research with Maria Teresa Chiri and Najmeh Salehi.

Posted September 21, 2021

Last modified October 11, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)
Ilya Kolmanovsky, University of Michigan
IEEE Fellow, AACC Eckman Awardee

Reference Governors for Control of Systems with Constraints

As systems are downsized and performance requirements become more stringent, there is an increasing need for methods that are able to enforce state and control constraints as a part of the control design. The constraints can represent actuator range and rate limits, safety and comfort limits, and obstacle avoidance requirements. Reference governors are add-on supervisory algorithms that monitor and, if necessary, modify commands that are passed to the nominal controller/closed-loop system to ensure that pointwise-in-time state and control constraints are not violated. Approaches to the construction of reference governors will be described along with the supporting theory. Recent extensions of reference governors, such as a controller state and reference governor (CSRG) that in addition to modifying references can reset the controller states, and opportunities for the application of reference governors to ensure feasibility of model predictive controllers, will be discussed. The learning reference governor, which integrates learning into the reference governor operation, to handle constraints in uncertain systems, will also be touched upon. The potential for the practical applications of reference governors will be illustrated with several examples.
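A toy scalar example conveys the add-on supervisory idea: the governor advances the applied reference toward the command only as far as the predicted constrained response permits. The dynamics, constraint, and numbers below are ad hoc illustrations, not from the talk.

```python
# Toy scalar reference governor: the prestabilized loop x+ = 0.9 x + 0.1 v
# has constrained output y = 1.5 x <= 1, and the raw command r = 2 would
# violate it. The governor applies v = v_prev + kappa (r - v_prev) with
# the largest kappa in [0, 1] whose predicted response stays feasible.
def feasible(x, v, horizon=100, ymax=1.0):
    for _ in range(horizon):               # forward-simulate the closed loop
        x = 0.9 * x + 0.1 * v
        if 1.5 * x > ymax + 1e-9:
            return False
    return True

def governor(x, v_prev, r, grid=51):
    best = v_prev                          # keeping the old reference is admissible
    for i in range(grid):                  # coarse line search over kappa
        v = v_prev + (i / (grid - 1)) * (r - v_prev)
        if feasible(x, v):
            best = v
    return best

x, v, r, ys = 0.0, 0.0, 2.0, []
for _ in range(100):
    v = governor(x, v, r)
    x = 0.9 * x + 0.1 * v
    ys.append(1.5 * x)
print(f"applied reference {v:.2f}, final output {ys[-1]:.3f}")
```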

Posted August 25, 2021

Last modified October 26, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)
Kyriakos Vamvoudakis, Georgia Institute of Technology

Learning-Based Actuator Placement and Receding Horizon Control for Security against Actuation Attacks

Cyber-physical systems (CPS) comprise interacting digital, analog, physical, and human components engineered for function through integrated physics and logic. Incorporating intelligence in CPS, however, makes their physical components more exposed to adversaries that can potentially cause failure or malfunction through actuation attacks. As a result, augmenting CPS with resilient control and design methods is of great significance, especially if an actuation attack is stealthy. Towards this end, in the first part of the talk, I will present a receding horizon controller, which can deal with undetectable actuation attacks by solving a game in a moving horizon fashion. In fact, this controller can guarantee stability of the equilibrium point of the CPS, even if the attackers have an information advantage. The case where the attackers are not aware of the decision-making mechanism of one another is also considered, by exploiting the theory of bounded rationality. In the second part of the talk, and for CPS that have partially unknown dynamics, I will present an online actuator placement algorithm, which chooses the actuators of the CPS that maximize an attack security metric. It can be proved that the maximizing set of actuators is found in finite time, despite the CPS having uncertain dynamics.

Posted September 27, 2021

Last modified November 3, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)
Silviu-Iulian Niculescu, Laboratoire des Signaux et Systèmes (L2S)

Delays in Interconnected Dynamical Systems: A Qualitative Analysis

It is well known that interconnections of two or more dynamical systems lead to increased complexity in the overall system's behavior, due to the effects induced by the emerging dynamics (which may include feedback loops) in significant interactions (involving sensing and communication) with environmental changes. One of the major problems appearing in such interconnection schemes is related to propagation, transport, and communication delays acting through, and inside, the interconnections. The aim of this talk is to briefly present user-friendly methods and techniques (based in part on frequency-domain approaches) for the analysis and control of dynamical systems in the presence of delays. The presentation will be kept as simple as possible, focusing on the main intuitive (algebraic and geometric) ideas used to develop theoretical results, and on their potential use in practical applications. Single and multiple delays will be considered. The talk ends with illustrative examples.

Posted August 18, 2021

Last modified October 31, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)
Kirsten Morris, University of Waterloo
IEEE Fellow, SIAM Fellow

Optimal Controller and Actuator Design for Partial Differential Equations

Control can be very effective in altering dynamics. One issue for partial differential equations is that performance depends not only on the controller, but also on its location and spatial design. Existence of a concurrent optimal controller and spatial distribution has been established for several classes of partial differential equations and objectives. Some of these results will be discussed and illustrated with examples.

Posted September 20, 2021

Last modified November 12, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)
Sonia Martinez, University of California, San Diego
IEEE Fellow

Data-Driven Dynamic Ambiguity Sets: Precision Tradeoffs under Noisy Measurements

Stochastic and robust optimization constitute natural frameworks to solve decision-making and control problems subject to uncertainty. However, these fall short in addressing real-world scenarios for which models of the uncertainty are not available. Data-driven approaches can help approximate such models, but typically require large amounts of data in order to produce performance-guaranteed results. Motivated by settings where the collection of data is costly and fast decisions need to be made online, we present recent work on the construction of dynamic ambiguity sets for uncertainties that evolve according to a dynamical law. In particular, we characterize the tradeoffs between the amount of progressively assimilated data and its future adequacy, due to the gradual loss of precision in its predicted values.

Posted September 8, 2021

Last modified October 11, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)
Jorge Cortes, University of California, San Diego
IEEE Fellow, SIAM Fellow

Resource-Aware Control and Coordination of Cyberphysical Systems

Trading computation and decision making for less communication, sensing, or actuator effort offers great promise for the autonomous operation of both individual and interconnected cyberphysical systems. Resource-aware control seeks to prescribe, in a principled way, when to use the available resources efficiently while still guaranteeing a desired quality of service in performing the intended task. This talk describes advances of this paradigm along three interconnected thrusts: the design of triggering criteria that balance the trade-offs among performance, efficiency, and implementability; the synthesis of distributed triggers in network systems that can be evaluated by individual agents; and the benefits of flexibly interpreting what constitutes a resource. Throughout the presentation, we illustrate our discussion with applications to stabilization under information constraints, opportunistic actuation of safety-critical systems, and information exchanges in the coordination of multi-agent systems.

Posted September 27, 2021

Last modified October 11, 2021

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)
Yacine Chitour, Laboratoire des Signaux et Systèmes (L2S)

Worst Exponential Decay Rate for Degenerate Gradient Flows Subject to Persistency of Excitation

In this talk, I will present results on the estimation of the worst rate of exponential decay of degenerate gradient flows $\dot x = -Sx$, arising in adaptive control theory. Under persistent excitation assumptions on the positive semi-definite matrix $S$, we provide upper bounds for this decay rate that are consistent with previously known lower bounds, together with analogous stability results for more general classes of persistently excited signals. The strategy of proof consists in relating the worst decay rate to optimal control questions and studying their solutions in detail. As a byproduct of our analysis, we obtain estimates for the worst $L_2$-gain of the time-varying linear control systems $\dot x = -cc^{\scriptscriptstyle T}x$, where the signal $c$ is persistently excited. This is joint work with Paolo Mason and Dario Prandi.
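The classic rotating-direction example illustrates the setting: at each frozen time the flow $\dot x = -c(t)c(t)^T x$ only contracts along the direction $c(t)$, yet persistency of excitation of $c$ yields exponential decay. The simulation below uses the standard choice c(t) = (cos t, sin t), an illustrative instance rather than one from the talk.

```python
import math

# Degenerate gradient flow x' = -c(t) c(t)^T x with the persistently
# exciting direction c(t) = (cos t, sin t): the instantaneous dynamics is
# only negative semi-definite, but the rotation of c(t) makes x(t) -> 0
# exponentially fast.
dt, T = 1e-3, 50.0
x = [1.0, 1.0]
t = 0.0
for _ in range(int(T / dt)):
    c0, c1 = math.cos(t), math.sin(t)
    s = c0 * x[0] + c1 * x[1]            # scalar c(t)^T x
    x = [x[0] - dt * c0 * s, x[1] - dt * c1 * s]
    t += dt
print(math.hypot(x[0], x[1]))            # exponentially small by t = 50
```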

Posted January 13, 2022

Last modified January 17, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Joel Rosenfeld, University of South Florida

Dynamic Mode Decompositions for Control Affine Systems

We will review the machine learning technique of dynamic mode decomposition (DMD) for continuous-time systems and show how it may be extended to produce models for the state of an unknown control-affine system using trajectory data. Trajectory data in this setting comes as a pair of a control signal and the corresponding controlled trajectory, and the DMD method for control-affine systems enables the prediction of the action of the system in response to a previously unobserved control signal. This will require a discussion of reproducing kernel Hilbert spaces (RKHSs), vector-valued RKHSs, control Liouville operators, and multiplication operators.
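For background, plain linear DMD, the starting point that the talk generalizes to control-affine systems and RKHSs, recovers the dynamics operator from snapshot pairs by a least-squares fit; the hidden matrix below is an illustrative assumption.

```python
import numpy as np

# Plain (linear) DMD: from snapshot pairs (x_k, x_{k+1}) generated by an
# unknown map x_{k+1} = A x_k, recover A by the least-squares fit Y X^+.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])   # hidden dynamics (assumed)
X = rng.standard_normal((2, 50))              # snapshot matrix [x_1 ... x_50]
Y = A_true @ X                                # shifted snapshots [x_2 ... x_51]
A_dmd = Y @ np.linalg.pinv(X)                 # DMD operator
print(np.linalg.norm(A_dmd - A_true))         # ~0: exact recovery without noise
```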

Posted February 3, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)
Michele Palladino, University of L’Aquila

Optimal Control of Moreau’s Sweeping Process

We present recent and new results on the optimal control of Moreau’s sweeping process (SP). We will present a novel approach to proving a version of the Pontryagin Maximum Principle in a general setting. This approach exploits a kind of small-time local controllability property which the SP dynamics naturally satisfies in a neighborhood of the moving constraint. Open problems and further research directions will be discussed extensively.

Posted January 13, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)
Emmanuel Trelat, Sorbonne Universite, Paris, France

On the Turnpike Property

The turnpike property was discovered in the 1950s by Nobel Prize winner Samuelson in economics. It stipulates that the optimal trajectory of an optimal control problem in large time remains essentially close to a steady state, which is itself the optimal solution of an associated static optimal control problem. We have established the turnpike property for general nonlinear finite- and infinite-dimensional optimal control problems, showing that the optimal trajectory is, except at the beginning and the end of the time interval, exponentially close to some (optimal) stationary state, and that this property also holds for the optimal control and for the adjoint vector coming from the Pontryagin maximum principle. We prove that the exponential turnpike property is due to a hyperbolicity phenomenon which is intrinsic to the symplectic feature of the extremal equations. We infer a simple and efficient numerical method to compute optimal trajectories in that framework, in particular an appropriate variant of the shooting method. The turnpike property turns out to be ubiquitous, and the turnpike set may be more general than a single steady state, such as a periodic trajectory. We also show the shape turnpike property for PDE models in which a subdomain evolves in time according to some optimization criterion. These works are in collaboration with Gontran Lance, Can Zhang, and Enrique Zuazua.

Posted February 8, 2022

Last modified February 21, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)
Cameron Nowzari, George Mason University

Implementable Event-Triggered Controllers for Networked Cyber-Physical Systems

Rapid development of technology is quickly leading us to an increasingly networked and wireless world. With massive wireless networks on the horizon, the efficient coordination of such large networks becomes an important consideration. To efficiently use the available resources, it is desirable to limit wireless communication to only the instances when the individual subsystems actually need attention. Unfortunately, classical time-triggered control systems are based on performing sensing, actuation, and even communication actions periodically in time rather than when necessary. This motivates the need to transcend this prevailing paradigm in exchange for event-triggered control (ETC), where individual subsystems must decide for themselves when to take different actions based on local information. The concept of ETC was proposed as early as the 1960s, and since then the idea has surged in popularity to essentially stand alone as an area of systems and control; only now are we starting to see practical applications. This begs the question: why is ETC not yet more mainstream, and why has industry still not adopted it in most actual control systems? In this talk we look at this question and argue that the majority of ETC algorithms being proposed today are too theoretical to be useful. We then show how we are addressing this problem by developing a standard set of tools and methodologies for co-designing efficient event-triggered communication and control algorithms for networked systems that can actually be used by practitioners, with quantifiable benefits, performance guarantees, and robustness properties. This talk identifies numerous shortcomings between theoretical concepts and what is actually needed in practice for the theory to be useful, and discusses how we might close this gap.
Finally, this talk will cover specific challenges we encountered in applying the state of-the-art event-triggered control algorithms to a wireless clock synchronization problem, and how we overcame them.

Posted February 3, 2022

Last modified February 24, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Pauline Bernard, MINES ParisTech

Observer Design for Continuous-Time Dynamical Systems

We review the main techniques of state observer design for continuous-time dynamical systems. Starting from necessary conditions for the existence of such asymptotic observers, we classify the available methods depending on the detectability/observability assumptions they require. We show how each class of observer relies on transforming the system dynamics into a particular normal form which allows the design of an observer, and how each observability condition guarantees the invertibility of its associated transformation and the convergence of the observer. A particular focus will be given to the promising theory of KKL or nonlinear Luenberger observers.
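
As a minimal numerical illustration of the asymptotic-observer idea reviewed in this talk (a plain Luenberger observer, not the KKL construction; the matrices and gain below are made-up values):

```python
import numpy as np

# Minimal Luenberger observer sketch (illustrative values):
# plant  x' = A x,  y = C x;  observer  xh' = A xh + L (y - C xh).
# The estimation error e = x - xh obeys e' = (A - L C) e, so choosing L
# that makes A - L C Hurwitz gives asymptotic convergence of the estimate.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [0.0]])           # A - L C has eigenvalues -2.5 +/- 1.32i

dt, steps = 0.001, 10_000              # forward-Euler simulation over 10 s
x = np.array([[1.0], [1.0]])           # true state (unknown to the observer)
xh = np.array([[0.0], [0.0]])          # observer estimate
for _ in range(steps):
    y = C @ x                          # only the output is measured
    x = x + dt * (A @ x)
    xh = xh + dt * (A @ xh + L @ (y - C @ xh))

error = float(np.linalg.norm(x - xh))  # decays to ~0 as t grows
```

The observability assumption enters exactly where the talk says it should: it guarantees that a stabilizing gain L exists.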

Posted January 29, 2022

Last modified February 1, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)
Luc Jaulin, ENSTA-Bretagne

Interval Contractors to Solve Dynamical Geometrical Equations with Application to Underwater Robotics

In Euclidean space, the separation between distinct points corresponds to their distance and is purely spatial and positive. In space-time, the separation between events takes into account not only spatial separation between the events, but also their temporal separation. We will consider problems involving geometrical constraints in space-time in an underwater robotics context. The motion of the robots will be described by differential equations, and the clocks attached to each robot are not synchronized. An interval contractor based technique is used to solve the distributed state estimation problem. The method is illustrated on the localization of a group of underwater robots with unsynchronized clocks. In this problem, the travel time of the sound that gives us the distances between robots cannot be neglected.

Posted January 28, 2022

Last modified March 9, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Anton Selivanov, University of Sheffield, UK

Time-Delay Implementation of Derivative-Dependent Control

Time delays in input or output channels often lead to instability and, therefore, are usually avoided. However, there are systems where delays have a stabilizing effect. This happens because time-delays allow one to approximate output derivatives and use them in the feedback law. In this talk, I will consider an LTI system that can be stabilized using only output derivatives. The derivatives are approximated by finite differences, leading to time-delayed feedback. I will present a method for designing and analyzing such feedback under continuous-time and sampled measurements. It will be shown that, if the derivative-dependent control exponentially stabilizes the system, then its time-delayed approximation stabilizes the system with the same decay rate provided the time delay is small enough.
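
A toy numerical sketch of the mechanism described above, with made-up gains and delay (not the talk's design method): a double integrator is stabilized using the output y and a finite-difference surrogate for its derivative, which turns the derivative-dependent law into a time-delayed feedback.

```python
# Double integrator y'' = u.  The derivative-dependent law u = -k1*y - k2*y'
# is implemented without measuring y', using the finite difference
# (y(t) - y(t - h)) / h, i.e., a time-delayed feedback.
k1, k2 = 2.0, 3.0          # illustrative gains (y'' = -k1 y - k2 y' is stable)
dt, h = 0.01, 0.05         # integration step and artificial delay
delay = int(h / dt)
steps = 2000               # simulate 20 s

y_hist = [1.0] * (delay + 1)   # y held constant on [-h, 0]
y, v = 1.0, 0.0                # position and (unmeasured) velocity
for _ in range(steps):
    y_delayed = y_hist[-(delay + 1)]
    u = -k1 * y - k2 * (y - y_delayed) / h   # delay-based derivative estimate
    y, v = y + dt * v, v + dt * u
    y_hist.append(y)

final = abs(y)   # the delayed feedback still drives y to 0 for small h
```

Consistent with the talk's message, the delayed approximation inherits stability from the derivative-dependent design when the delay h is small enough.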

Posted January 13, 2022

Last modified February 21, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)
Pierdomenico Pepe, University of L'Aquila

Sampled-Data Event-Based Stabilization of Retarded Nonlinear Systems

We present an event-based controller for the stabilization of nonlinear retarded systems. The main features of the controller we provide are that (i) only sampled-data measures of the Euclidean internal variable are needed, thus avoiding continuous-time monitoring of the state in infinite-dimensional spaces, (ii) the event function is evaluated only at sampling instants and involves a finite number of the most recent measures, and (iii) discontinuous feedbacks and non-uniform sampling are allowed. The controller guarantees semi-global practical asymptotic stability to an arbitrarily small final target ball around the origin, by suitably fast sampling.

Posted January 31, 2022

Last modified April 6, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)
Franco Rampazzo, Dipartimento di Matematica Pura ed Applicata, Università degli Studi di Padova

Goh and Legendre-Clebsch Conditions for Non-Smooth Optimal Control Problems

Various generalizations of the original Maximum Principle (Pontryagin et al., 1956) have been produced in different theoretical frameworks in the literature, starting from the pioneering works of F. Clarke in the 1970s up to recent papers. For an end-point constrained optimal control problem with control affine dynamics, I will present ideas (from a work in progress with F. Angrisani) in the direction of adding higher order necessary conditions to the Maximum Principle. In particular, one can generalize the classical Goh condition and the Legendre-Clebsch condition (which include Lie brackets) to the case where the data are nonsmooth. In fact, the recently introduced notion of Quasi Differential Quotient (Palladino and R., 2020) allows one to treat two simultaneous kinds of non-smoothness, namely the one concerning the adjoint inclusion and the one connected with the set-valued Lie brackets (R. and Sussmann 2001), within the same framework.
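
For orientation (in the smooth case; the talk's contribution is precisely to relax this smoothness), the classical conditions for a control-affine system $\dot x = f(x) + \sum_i u_i\, g_i(x)$ along a singular extremal read, up to the sign convention chosen for the Hamiltonian:

```latex
% Goh condition: the adjoint p annihilates the Lie brackets of the
% control vector fields along a singular extremal,
\langle p(t),\, [g_i,\, g_j](x(t)) \rangle = 0 ,
% Generalized Legendre-Clebsch condition (sign depends on the
% minimization/maximization convention for the Hamiltonian):
\langle p(t),\, [g_i,\, [f,\, g_i]](x(t)) \rangle \;\ge\; 0 .
```

When f and the g_i are nonsmooth, the brackets above must be replaced by set-valued objects, which is where the Quasi Differential Quotient framework mentioned in the abstract enters.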

Posted February 6, 2022

Last modified February 18, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Xiaobo Tan, Michigan State University
IEEE Fellow, NSF CAREER Awardee

Modeling and Control of Hysteresis Using Minimal Representations

Hysteresis remains a key nonlinearity in magnetic and smart material actuators that challenges their control performance. High-fidelity modeling and effective compensation of hysteresis, yet with low computational complexity, are of immense interest. In this talk I will share some recent advances in this direction via several examples. First, I will present the optimal reduction problem for a Prandtl-Ishlinskii (PI) operator, one of the most popular hysteresis models, where an optimal approximation of the original operator with fewer constituent elements (play operators) is obtained via efficient dynamic programming. Second, I will discuss adaptive estimation of play radii, instead of their weights, as an alternative means for accurate modeling of hysteresis with a PI operator of low complexity. Finally, I will report a dynamic inversion approach to hysteresis compensation that requires minimal, qualitative conditions on the system model. Throughout the talk I will use experimental results from smart materials to illustrate the methods.
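
A minimal sketch of the building blocks named in the abstract: a discrete-time play operator and a Prandtl-Ishlinskii superposition (the radii and weights below are illustrative, not an identified model from the talk).

```python
# Sketch of a discrete-time play operator and a Prandtl-Ishlinskii (PI) sum.
def play(inputs, r, p0=0.0):
    """Play (backlash) operator of radius r: p_k = clamp(p_{k-1}, v_k - r, v_k + r)."""
    p, out = p0, []
    for v in inputs:
        p = max(v - r, min(v + r, p))
        out.append(p)
    return out

def pi_model(inputs, radii, weights):
    """PI hysteresis output: weighted superposition of play operators."""
    plays = [play(inputs, r) for r in radii]
    return [sum(w * pr[k] for w, pr in zip(weights, plays))
            for k in range(len(inputs))]

# Ramp the input up to 1, then back down to 0.5: the play output lags
# the input by r and "sticks" on reversal -- the signature of hysteresis.
up = [i / 10 for i in range(11)]            # 0.0 ... 1.0
down = [1.0 - i / 10 for i in range(6)]     # 1.0 ... 0.5
traj = play(up + down, r=0.3)
```

The model-reduction problem in the talk amounts to approximating a PI sum like `pi_model` with fewer play elements; the adaptive-estimation result adjusts the radii `r` rather than the weights.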

Posted January 27, 2022

Last modified February 15, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)
Andrea Serrani, Ohio State University

Adaptive Feedforward Compensation of Harmonic Disturbances for Convergent Nonlinear Systems

Rejecting periodic disturbances occurring in dynamical systems is a fundamental problem in control theory, with numerous technological applications such as control of vibrating structures, active noise control, and control of rotating mechanisms. From a theoretical standpoint, any design philosophy aimed at solving this problem reposes upon a specific variant of the internal model principle, which states that regulation can be achieved only if the controller embeds a copy of the exogenous system generating the periodic disturbance. In the classic internal model control (IMC), the plant is augmented with a replica of the exosystem, and the design is completed by a unit which provides stability of the closed loop. In a somewhat alternative design methodology, referred to as adaptive feedforward compensation (AFC), a stabilizing controller for the plant is computed first and then an observer of the exosystem is designed to provide asymptotic cancelation of the disturbance at the plant input. In particular, the parameters of the feedforward control are computed adaptively by means of pseudo-gradient optimization, using the regulated error as a regressor. Contrary to IMC, which has been the focus of extensive investigation, application of AFC methods to nonlinear systems has remained so far elusive. This talk aims at presenting results that set the stage for a theory of AFC for nonlinear systems by providing a nonlinear equivalent of the condition for the solvability of the problem in the linear setting, and by re-interpreting classical linear schemes in a fully nonlinear setting. To this end, the problem is approached by combining methods from output regulation theory with techniques for semi-global stabilization.

Posted January 26, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)
Sophie Tarbouriech, Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS), France

Algorithms for Event-Triggered Control

Event-triggered control consists of devising event-triggering mechanisms that update the control only when needed, so that control updates are infrequent. In the context of event-triggered control, two objectives that can be pursued are (1) emulation, whereby the controller is a priori predesigned and only the event-triggering rules have to be designed, and (2) co-design, where the joint design of the control law and the event-triggering conditions has to be performed. Control systems are connected to generic digital communication networks for implementation, transmission, coding, or decoding. Therefore, event-triggered control strategies have been developed to cope with communication, energy consumption, and computation constraints. The talk is within this scope. Considering linear systems, the design of event-triggering mechanisms using local information is described through linear matrix inequality (LMI) conditions. From these conditions, the asymptotic stability of the closed-loop system, together with the avoidance of Zeno behavior, is ensured. Convex optimization problems are studied to determine the parameters of the event-triggering rule with the goal of reducing the number of control updates.
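
A scalar toy example of the event-triggered loop described above (made-up numbers, and a simple relative-error triggering rule rather than the talk's LMI-based design):

```python
# Toy event-triggered control loop.  Plant x' = x + u with feedback
# u = -2 x(t_k) held constant between events; a new event fires when the
# sampling error e = x(t_k) - x(t) grows past a fraction of the state norm.
dt, steps, sigma = 0.001, 5000, 0.2
x, xk, updates = 1.0, 1.0, 1
for _ in range(steps):
    if abs(xk - x) > sigma * abs(x):   # event-triggering rule
        xk = x                          # sample and update the control
        updates += 1
    x = x + dt * (x - 2.0 * xk)         # forward-Euler step of x' = x + u

# Far fewer control updates than time steps, yet x still converges; the
# relative threshold also keeps inter-event times bounded away from zero.
```

The number of updates (a few dozen) versus the number of time steps (5000) is exactly the resource saving that motivates event-triggered strategies.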

Posted February 2, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Monica Motta, Università di Padova, Italy

Stabilizability in Optimal Control

We address the stabilizability of nonlinear control systems in an optimal control theoretic framework. First, we extend the nowadays classical concepts of sampling and Euler solutions that were developed by F. Clarke, Y. Ledyaev, E. Sontag, A. Subbotin, and R.J. Stern (1997, 2000) for control systems associated with discontinuous feedbacks, by also considering corresponding costs, given by integrals of a nonnegative Lagrangian. In particular, we introduce the notions of sample and Euler stabilizability to a closed target set with regulated cost, which require the existence of a stabilizing feedback that keeps the cost of all sampling and Euler solutions starting from the same point below the same level. Then, under mild regularity hypotheses on the dynamics and on the Lagrangian, we prove that the existence of a special control Lyapunov function, which we call a minimum restraint function (MRF), implies not only stabilizability, but also that all sample and Euler stabilizing trajectories have regulated costs. The proof is constructive, being based on the synthesis of appropriate feedbacks derived from the MRF. As in the case of classical control Lyapunov functions, this construction requires that the MRF be locally semiconcave. However, by generalizing an earlier result by L. Rifford (2000), we establish that it is possible to trade regularity assumptions on the data for milder regularity assumptions on the MRF. In particular, we show that if the dynamics and the Lagrangian are locally Lipschitz up to the boundary of the target, then the existence of a mere locally Lipschitz MRF provides sample and Euler stabilizability with regulated cost. This talk is based on a joint work with Anna Chiara Lai (Sapienza University of Rome, Italy), which is part of an ongoing, wider investigation of global asymptotic controllability and stabilizability from an optimal control perspective.

Posted August 8, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Lorena Bociu, North Carolina State University
PECASE Awardee

Analysis and Control in Poroelastic Systems with Applications to Biomedicine

Fluid flows through deformable porous media are relevant for many applications in biology, medicine and bio-engineering, including tissue perfusion, fluid flow inside cartilages and bones, and design of bioartificial organs. Mathematically, they are described by quasi-static nonlinear poroelastic systems, which are implicit, degenerate, coupled systems of partial differential equations (PDE) of mixed parabolic-elliptic type. We answer questions related to tissue biomechanics via well-posedness theory, sensitivity analysis, and optimal control for the poroelastic PDE coupled systems mentioned above. One application of particular interest is perfusion inside the eye and its connection to the development of neurodegenerative diseases.

Posted August 23, 2022

Last modified September 5, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Sasa Rakovic, Beijing Institute of Technology

Minkowski, Lyapunov, and Bellman: Inequalities and Equations for Stability and Optimal Control

The classical Lyapunov and Bellman equations and inequalities are cornerstone objects in linear systems theory. They are concerned with convex quadratic functions verifying stability, in the case of the Lyapunov equation and inequalities, and verifying both optimality and stability, in the case of the Bellman equation and inequalities. Rather peculiarly, prior to my work in the area, very little had been known about the analogous Lyapunov and Bellman equations and inequalities within the space of Minkowski functions of nonempty convex compact subsets containing the origin in their interior. My recent research has provided complete characterizations of the solutions to these equations and inequalities within the space of Minkowski functions, referred to as the Minkowski-Lyapunov and Minkowski-Bellman equations and inequalities, respectively. The talk reports key results underpinning the study of these fundamental equations and inequalities and their generalizations. The talk also renders strong evidence of the topological flexibility and theoretical soundness of the developed frameworks, and of their consequent advantages over the traditional Lyapunov and Bellman equations and inequalities.
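
For context, the classical quadratic objects being generalized are, in discrete time with dynamics $x^+ = Ax$ (and $x^+ = Ax + Bu$ for the Bellman case):

```latex
% Lyapunov equation and inequality in the space of quadratic functions
% V(x) = x^T P x, certifying stability of x^+ = A x:
A^\top P A - P + Q = 0, \qquad A^\top P A - P \preceq 0, \qquad P \succ 0.
% Bellman equation for the infinite-horizon LQ cost, certifying both
% optimality and stability:
V(x) = \min_{u} \left( x^\top Q x + u^\top R u + V(Ax + Bu) \right).
```

The talk's program replaces the quadratic candidates $x^\top P x$ with Minkowski (gauge) functions of convex compact sets containing the origin in their interior.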

Posted September 4, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Michael Margaliot, Tel Aviv University

Revisiting Totally Positive Differential Systems: A Tutorial and New Results

A matrix is called totally nonnegative (TN) if all of its minors are nonnegative, and totally positive (TP) if all its minors are positive. Multiplying a vector by a TN matrix does not increase the number of sign variations in the vector. In a largely forgotten paper, Schwarz (1970) considered matrices whose exponentials are TN or TP. He also analyzed the evolution of the number of sign changes in the vector solutions of the corresponding linear system. In a seemingly different line of research, Smillie (1984), Smith (1991), and others analyzed the stability of nonlinear tridiagonal cooperative systems by using the number of sign variations in the derivative vector as an integer-valued Lyapunov function. We provide a tutorial on these topics and show that they are intimately related. This allows us to derive generalizations of the results by Smillie (1984) and Smith (1991) while simplifying the proofs. This also opens the door to many new and interesting research directions. This is joint work with Eduardo D. Sontag from Northeastern University.
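
The two definitions in the first sentences are easy to check computationally. The following sketch (tiny matrices only, brute-force minors) verifies the variation-diminishing property on a hand-picked TN matrix; the examples are illustrative, not from the talk.

```python
import itertools

def sign_variations(v):
    """Number of sign changes in v, ignoring zero entries."""
    signs = [1 if x > 0 else -1 for x in v if x != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def det(M):
    """Determinant by cofactor expansion (tiny matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def is_totally_nonnegative(M):
    """Brute-force TN check: every minor of M is >= 0."""
    n = len(M)
    for k in range(1, n + 1):
        for rows in itertools.combinations(range(n), k):
            for cols in itertools.combinations(range(n), k):
                if det([[M[i][j] for j in cols] for i in rows]) < -1e-12:
                    return False
    return True

# Variation-diminishing property: multiplying by a TN matrix
# cannot increase the number of sign variations in a vector.
A = [[1.0, 1.0], [1.0, 2.0]]     # all minors nonnegative, hence TN
for v in ([1, -1], [-2, 3], [1, 1]):
    Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
    assert sign_variations(Av) <= sign_variations(v)
```

In the dynamical setting of the talk, it is this count of sign variations, applied to the derivative vector, that serves as an integer-valued Lyapunov function.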

Posted September 14, 2022

Last modified September 15, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Maria Teresa Chiri, Queen's University

Soil Searching by an Artificial Root

We model an artificial root which grows in the soil for underground prospecting. Its evolution is described by a controlled system of two integro-partial differential equations: one for the growth of the body and the other for the elongation of the tip. At any given time, the angular velocity of the root is obtained by solving a minimization problem with state constraints. We prove the existence of solutions to the evolution problem, up to the first time where a "breakdown configuration" is reached. Some numerical simulations are performed, to test the effectiveness of our feedback control algorithm. This is a joint work with Fabio Ancona (University of Padova) and Alberto Bressan (Penn State University).

Posted August 22, 2022

Last modified September 28, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Christophe Prieur, Université Grenoble Alpes

Stabilization of Nonlinear PDE by Means of Nonlinear Boundary Controls

In this presentation, the focus will be on the design of boundary controls for distributed parameter systems such as those described by linear and nonlinear partial differential equations. Saturated controllers will be discussed in this talk such as those modeling feedback laws in the presence of amplitude constraints. We will review techniques for the stability analysis and the derivation of design conditions for various PDEs such as parabolic and hyperbolic ones. An application to nuclear fusion will conclude this lecture.

Posted August 16, 2022

Last modified September 24, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Matthew Peet, Arizona State University

An Algebraic Framework for Representation, Analysis, Control and Simulation of Delayed and Partial Differential Equations

We explain the recently proposed partial integral equation representation and show how it enables us to solve many problems in analysis, control, and simulation of delayed and partial differential equations. We start by defining the *-algebra of partial integral (PI) operators. Next, we show that through a similarity transformation, the solution of a broad class of delayed and partial differential equations may be equivalently represented using a partial integral equation (PIE) - an equation parameterized by PI operators. We then show that many analysis and control problems for systems represented as a PIE may be solved through convex optimization of PI operators. Finally, we discuss software which automates the process of conversion to PIE, analysis, optimal controller synthesis, implementation, and simulation.
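
For orientation, a (3-)PI operator acting on a function x on an interval [a, b] has, up to notation, the form:

```latex
% A partial integral (PI) operator combines a multiplier with integral
% terms over the two sub-intervals determined by s:
\bigl(\mathcal{P}\,\mathbf{x}\bigr)(s)
  = P_0(s)\,\mathbf{x}(s)
  + \int_a^{s} P_1(s,\theta)\,\mathbf{x}(\theta)\,d\theta
  + \int_s^{b} P_2(s,\theta)\,\mathbf{x}(\theta)\,d\theta .
```

Operators of this form are closed under composition, addition, and adjoint, which is what makes the *-algebra structure, and hence convex optimization over PI operators, possible.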

Posted September 12, 2022

Last modified October 17, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Craig Woolsey, Virginia Tech

Port-Hamiltonian Modeling and Energy-Based Control of Ocean and Atmospheric Vehicles

The dynamics of a wide variety of vehicles can be represented using noncanonical Hamiltonian system models with dissipation and exogenous inputs. The Hamiltonian structure captures energy exchange among subsystem elements, the noncanonical form accommodates rotating reference frames, and the exogenous inputs allow for control commands and for disturbances that are not readily incorporated into the Hamiltonian form. Because these models typically describe a system's behavior within a large region of state space, and because the system structure provides a natural starting point for Lyapunov-based control design, noncanonical Hamiltonian models are especially well-suited to developing large-envelope nonlinear control laws. The presentation will include several examples from the speaker's experience, such as space vehicles, autonomous underwater vehicles (AUVs), and uncrewed air vehicles (UAVs). A particular emphasis will be recent theoretical results, supported by experimental demonstrations, of passivity-based control laws for fixed-wing aircraft. In considering these examples, a unifying theme will emerge: recognizing and exploiting the nonlinear mechanical system structure of the governing equations to obtain provably effective control strategies.
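
For context, a standard input-state-output port-Hamiltonian model with dissipation takes the form:

```latex
% J(x) = -J(x)^T captures energy exchange, R(x) >= 0 captures dissipation,
% G(x) maps exogenous inputs, and H is the total energy:
\dot{x} = \bigl(J(x) - R(x)\bigr)\,\nabla H(x) + G(x)\,u,
\qquad y = G(x)^{\top}\,\nabla H(x),
% with the energy balance that underlies passivity-based control:
\frac{d}{dt} H(x(t)) = -\,\nabla H^{\top} R\, \nabla H + y^{\top} u \;\le\; y^{\top} u .
```

The energy-balance inequality is the starting point for the Lyapunov-based designs mentioned in the abstract: shaping H and injecting damping yields provably stabilizing feedback.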

Posted September 11, 2022

Last modified October 21, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Na (Lina) Li, Harvard University
Donald P. Eckman, AFOSR YIP, NSF CAREER, and ONR YIP Awardee

Scalable Distributed Control and Learning of Networked Dynamical Systems

Recent radical evolution in distributed sensing, computation, communication, and actuation has fostered the emergence of cyber-physical network systems. Regardless of the specific application, one central goal is to shape the network's collective behavior through the design of admissible local decision-making algorithms. This is nontrivial due to various challenges such as local connectivity, system complexity and uncertainty, limited information structure, and the complex intertwined physics and human interactions. In this talk, I will present our recent progress in formally advancing the systematic design of distributed coordination in network systems via harnessing special properties of the underlying problems and systems. In particular, we will present three examples and discuss three types of properties: i) how to use network structure to ensure the performance of the local controllers; ii) how to use the information and communication structure to develop distributed learning rules; iii) how to use domain-specific properties to further improve the efficiency of the distributed control and learning algorithms.

Posted June 12, 2022

Last modified September 14, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Naomi Leonard, Princeton University
MacArthur Fellow and Fellow of ASME, IEEE, IFAC, and SIAM

Nonlinear Opinion Dynamics on Networks: Agreeing, Disagreeing, and Avoiding Indecision

I will present continuous-time multi-option nonlinear opinion dynamics for a group of agents that observe or communicate opinions over a network. Nonlinearity is introduced by saturating opinion exchanges: this enables a wide range of analytically tractable opinion-forming behaviors, including agreement and disagreement, deadlock breaking, tunable sensitivity to input, oscillations, flexible transition between opinion configurations, and opinion cascades. I will discuss how network-dependent tuning rules can robustly control the system behavior and how state-feedback dynamics for model parameters make the behavior adaptive to changing external conditions. The model provides new means for systematic study and design of dynamics on networks in nature and technology, including the dynamics of decision-making, spreading processes, polarization, games, navigation, and task allocation. I will demonstrate with applications to multi-robot teams. This is joint work with Anastasia Bizyaeva and Alessio Franci and based on the paper https://doi.org/10.1109/TAC.2022.3159527 with reference to other key papers with additional collaborators, including https://doi.org/10.1109/LCSYS.2022.3185981 and https://doi.org/10.1109/LCSYS.2021.3138725.
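
A scalar caricature of the saturated opinion dynamics (far simpler than the networked multi-option model in the talk, with made-up parameters) already exhibits the deadlock-breaking pitchfork:

```python
import math

# Scalar caricature:  z' = -d*z + tanh(u*z).
# For u <= d the only equilibrium is z = 0 (indecision); for u > d a
# pitchfork bifurcation creates two stable, decisive equilibria, so an
# arbitrarily small initial bias breaks the deadlock.
def simulate(z0, d=1.0, u=2.0, dt=0.01, steps=2000):
    z = z0
    for _ in range(steps):
        z += dt * (-d * z + math.tanh(u * z))
    return z

z_pos = simulate(0.1)    # small positive bias -> committed positive opinion
z_neg = simulate(-0.1)   # small negative bias -> committed negative opinion
```

In the full model, the attention parameter playing the role of `u` is itself given feedback dynamics, which is what makes the behavior adaptive to changing external conditions.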

Posted August 8, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Nader Motee, Lehigh University
AFOSR YIP and ONR YIP Awardee

Finite-Section Approximation of Carleman Linearization and Its Exponential Convergence

The Carleman linearization is one of the mainstream approaches for lifting a finite-dimensional nonlinear dynamical system into an infinite-dimensional linear system, with the promise of providing accurate finite-dimensional linear approximations of the original nonlinear system over larger regions around the equilibrium and longer time horizons than the conventional first-order linearization approach. Finite-section approximations of the lifted system have been widely used to study dynamical and control properties of the original nonlinear system. In this context, some of the outstanding problems are to determine under what conditions, as the finite-section order (i.e., truncation length) increases, the trajectory of the resulting approximate linear system converges to that of the original nonlinear system, and whether the time interval over which the convergence happens can be quantified explicitly. In this talk, I will present explicit error bounds for the finite-section approximation and prove that the convergence is indeed exponential as a function of the finite-section order. For a class of nonlinear systems, it is shown that one can achieve exponential convergence over the entire time horizon up to infinity. Our results are of practical relevance, e.g., for approximating nonlinear systems in model predictive control and for reachability analysis in verification, control, and planning, as our proposed error bounds can be used to determine proper truncation lengths for a given sampling period.
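
A toy instance of the lifting (a scalar example chosen for illustration; the talk treats the general finite-section error analysis):

```python
import math

# Carleman linearization of the scalar system x' = -x + x^2.
# Lifting y_k = x^k gives y_k' = -k*y_k + k*y_{k+1}: an infinite linear
# system.  The finite section of order N simply sets y_{N+1} = 0.
N = 5
x0, T, dt = 0.2, 1.0, 1e-4
y = [x0 ** k for k in range(1, N + 1)]   # initial lifted state (y_1 .. y_N)
for _ in range(int(T / dt)):
    dy = [-k * y[k - 1] + (k * y[k] if k < N else 0.0)
          for k in range(1, N + 1)]
    y = [yk + dt * dyk for yk, dyk in zip(y, dy)]
x_carleman = y[0]                        # approximation of x(T)

# Exact solution of x' = -x + x^2 for comparison:
x_exact = x0 * math.exp(-T) / (1.0 - x0 + x0 * math.exp(-T))
```

Increasing the truncation order N shrinks the gap between `x_carleman` and `x_exact`, which is the convergence whose rate the talk quantifies.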

Posted August 31, 2022

Last modified November 27, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Wei Ren, University of California Riverside
IEEE Fellow

Distributed Average Tracking and Continuous-time Optimization in Multi-agent Networks

We introduce a distributed average tracking problem and present distributed discontinuous control algorithms to solve the problem. The idea of distributed average tracking is that multiple agents track the average of multiple time-varying reference signals in a distributed manner based only on local information and local communication with adjacent neighbors. We study cases where the time-varying reference signals have bounded derivatives and accelerations. We also use the distributed average tracking idea to solve a continuous-time distributed convex optimization problem. Tools from nonsmooth analysis are used to analyze the stability of the systems. Simulation and experimental results are presented to illustrate the theoretical results.
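
A simplified special case of the idea (constant reference signals, so plain Laplacian consensus suffices; the talk's discontinuous algorithms handle time-varying signals):

```python
# When the reference signals are constant, the consensus dynamics
#   x_i' = -sum_j a_ij (x_i - x_j),   x_i(0) = r_i,
# drive every agent to the average of the r_i using only
# neighbor-to-neighbor communication on the graph.
refs = [1.0, 4.0, 2.0, 7.0]                # local reference values r_i
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # ring communication graph
x = list(refs)
dt = 0.05
for _ in range(400):                       # simulate 20 s of consensus
    dx = [0.0] * len(x)
    for i, j in edges:
        dx[i] += x[j] - x[i]
        dx[j] += x[i] - x[j]
    x = [xi + dt * di for xi, di in zip(x, dx)]

average = sum(refs) / len(refs)            # each x_i ends up near 3.5
```

The symmetric pairwise exchanges conserve the sum of the states, which is why the agents agree on the exact average rather than some other common value.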

Posted September 14, 2022

Last modified September 27, 2022

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Maryam Yashtini, Georgetown University

Counting Objects by Diffused Index: Geometry-Free and Training-Free Approach

Counting objects is a fundamental but challenging problem. In this talk, we propose diffusion-based, geometry-free, and learning-free methodologies to count the number of objects in images. The main idea is to represent each object by a unique index value regardless of its intensity or size, and to simply count the number of index values. First, we place distinct vectors, referred to as seed vectors, uniformly throughout the mask image, which carries the boundary information of the objects to be counted. Second, the seeds are diffused using an edge-weighted harmonic variational optimization model within each object. We propose an efficient algorithm based on an operator splitting approach and the alternating direction minimization method, and a theoretical analysis of this algorithm is given. An optimal solution of the model is obtained when the distributed seeds are completely diffused, so that there is a unique intensity within each object, which we refer to as an index. For computational efficiency, we stop the diffusion process before full convergence and propose to cluster the diffused index values. We refer to this approach as Counting Objects by Diffused Index (CODI). We explore scalar and multi-dimensional seed vectors. For scalar seeds, we use Gaussian fitting in a histogram to count, while for vector seeds, we exploit a high-dimensional clustering method for the final counting step. The proposed method remains effective even when object boundaries are unclear or not fully enclosed. We present counting results in various applications such as biological cells, agriculture, concert crowds, and transportation. Some comparisons with existing methods are presented.
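
A toy 1-D illustration of the diffused-index idea (much simplified from the paper's edge-weighted variational model; the mask, seeds, and clustering threshold are made up for this sketch):

```python
# Each masked pixel gets a seed value; local averaging *within the mask*
# diffuses the seeds until every connected object carries a single index
# value, and counting distinct index values counts the objects --
# no geometry and no training involved.
mask = [1, 1, 0, 1, 1, 1, 0, 1]               # three objects separated by 0s
vals = [float(i) for i in range(len(mask))]   # seed values (pixel positions)

for _ in range(500):                          # diffuse seeds inside each object
    new = list(vals)
    for i, m in enumerate(mask):
        if m:
            nbrs = [vals[j] for j in (i - 1, i, i + 1)
                    if 0 <= j < len(mask) and mask[j]]
            new[i] = sum(nbrs) / len(nbrs)
    vals = new

# Cluster the diffused indices: values within one object have converged,
# so counting well-separated values counts the objects.
indices = sorted(v for v, m in zip(vals, mask) if m)
count = 1 + sum(1 for a, b in zip(indices, indices[1:]) if b - a > 0.5)
```

Stopping the diffusion early and clustering, as in the paper's CODI pipeline, trades exactness of the per-object index for computational efficiency.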

Posted December 12, 2022

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Giulia Giordano, University of Trento, Italy
SIAG on Control and Systems Theory Prize Awardee

What We Can Learn from the System Structure in Biology and Epidemiology

Biological, ecological and epidemiological systems can be seen as dynamical networks, namely dynamical systems that are naturally endowed with an underlying network structure, because they are composed of subsystems that interact according to an interconnection. Despite their large scale and complexity, natural systems often exhibit extraordinary robustness that preserves fundamental properties and qualitative behaviors even in the presence of huge parameter variations and environmental fluctuations. First, we focus on biochemical reaction networks and look for the source of the amazing robustness that often characterizes them, by identifying properties and emerging behaviors that exclusively depend on the system structure (i.e., the graph structure along with qualitative information), regardless of parameter values. We introduce the BDC-decomposition to capture the system structure and enable the parameter-free assessment of important properties, including the stability of equilibria and the sign of steady-state input-output influences, thus allowing structural model falsification and structural comparison of alternative mechanisms proposed to explain the same phenomenon. Then, inspired by the COVID-19 pandemic and the observation that compartmental models for epidemics can be seen as a special class of chemical reaction networks, we consider epidemiological systems describing the spread of infectious diseases within a population, along with control approaches to curb the contagion. We illustrate strategies to cope with the deep uncertainty affecting parameter values and optimally control the epidemic.
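
As a minimal instance of the compartmental-model viewpoint mentioned above (an SIR model with made-up rates, not a calibrated COVID-19 model):

```python
# Minimal SIR compartmental model, a special case of a chemical reaction
# network:  S' = -b*S*I,  I' = b*S*I - g*I,  R' = g*I,  S + I + R = 1.
b, g = 0.5, 0.2                      # illustrative contact and recovery rates
S, I, R = 0.99, 0.01, 0.0            # normalized initial compartments
dt = 0.01
peak = I
for _ in range(20000):               # simulate 200 days
    dS = -b * S * I
    dI = b * S * I - g * I
    dR = g * I
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    peak = max(peak, I)              # epidemic peak prevalence
```

The structural facts hold regardless of the exact parameter values: total population is conserved, and an outbreak occurs exactly when the reproduction number b/g exceeds 1, which is the kind of parameter-free conclusion the talk's structural methods target.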

Posted January 27, 2023

Last modified January 29, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Raphael Jungers, Université Catholique de Louvain

Data-Driven Control of Hybrid Systems and Chance-Constrained Optimization

Control systems are increasingly complex, often to the point that obtaining a model for them is out of reach. In some situations, (parts of) the systems are proprietary, so that the very equations that rule their behavior cannot be known. On the other hand, the ever-growing progress in hardware technologies often enables one to retrieve massive data, e.g., from embedded sensors. Due to these evolutions, control can alternatively be approached in a model-free and data-driven paradigm. For linear time-invariant systems, classical results from identification theory provide a rather straightforward approach. However, these approaches become inefficient once one relaxes the assumptions they rely upon, e.g., linearity, Gaussian noise, etc. This is especially the case in safety-critical applications, where one needs guarantees on the performance of the obtained solution. Despite these difficulties, one may sometimes recover firm guarantees on the behavior of the system. This may require changing one's point of view on the nature of the guarantees we seek. I will provide examples of such results for different control tasks and different complex systems, and will raise the question of fundamental theoretical barriers for these problems.

Posted December 12, 2022

Last modified February 8, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Patrick L. Combettes, North Carolina State University
IEEE Fellow

Perspective Functions

I will discuss mathematical and computational issues pertaining to perspective functions, a powerful concept that makes it possible to extend a convex function to a jointly convex one in terms of an additional scale variable. Recent results on perspective functions with nonlinear scales will also be discussed, along with applications to inverse problems and statistics.
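For orientation (an illustration of the standard definition, not material from the talk): the perspective of a convex function f is P_f(x, t) = t f(x/t) for t > 0, and it is jointly convex in (x, t). A minimal numerical sketch:

```python
import numpy as np

def perspective(f, x, t):
    """Perspective P_f(x, t) = t * f(x / t), defined for t > 0."""
    if t <= 0:
        raise ValueError("perspective requires t > 0")
    return t * f(x / t)

# Example: f(x) = ||x||^2 gives P_f(x, t) = ||x||^2 / t, the familiar
# jointly convex "quadratic-over-linear" function.
f = lambda x: float(np.dot(x, x))
x = np.array([3.0, 4.0])
print(perspective(f, x, 2.0))  # ||x||^2 / t = 25 / 2 = 12.5
```

For f(x) = ||x||^2 this recovers the quadratic-over-linear function ||x||^2 / t, a standard example of how the scale variable enters.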

Posted February 3, 2023

Last modified February 6, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Enrique Zuazua, Friedrich-Alexander-Universität Erlangen-Nürnberg
2022 SIAM W.T. and Idalia Reid Prize Winner

Control and Machine Learning

In this lecture, we present some recent results on the interplay between control and machine learning, and more precisely, supervised learning and universal approximation. We adopt the perspective of the simultaneous or ensemble control of systems of residual neural networks (or ResNets). Roughly, each item to be classified corresponds to a different initial datum for the Cauchy problem of the ResNets, leading to an ensemble of solutions to be driven to the corresponding targets, associated with the labels, by means of the same control. We present a genuinely nonlinear and constructive method, allowing us to show that such an ambitious goal can be achieved, estimating the complexity of the control strategies. This property is rarely fulfilled by the classical dynamical systems in mechanics, and the very nonlinear nature of the activation function governing the ResNet dynamics plays a decisive role. It allows deformation of half of the phase space while the other half remains invariant. The turnpike property is also analyzed in this context, showing that a suitable choice of the cost functional used to train the ResNet leads to more stable and robust dynamics. This lecture is inspired by joint work, among others, with Borjan Geshkovski (MIT), Carlos Esteve (Cambridge), Domènec Ruiz-Balet (IC, London) and Dario Pighin (Sherpa.ai).

Posted January 17, 2023

Last modified February 27, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Spring Berman, Arizona State University
DARPA Young Faculty and ONR Young Investigator Awardee

Scalable Control of Robotic Swarms with Limited Information

Robotic swarms are currently being developed for many applications, including environmental sensing, exploration and mapping, infrastructure inspection, disaster response, agriculture, and logistics. However, significant technical challenges remain before they can be robustly deployed in uncertain, dynamic environments. We are addressing the problem of controlling swarms of robots that lack prior data about the environment and reliable inter-robot communication. As in biological swarms, the highly resource-constrained robots would be restricted to information obtained through local sensing and signaling. We are developing scalable control strategies that enable swarms to operate largely autonomously, with user input consisting only of high-level directives that map to a small set of robot parameters. In this talk, I describe control strategies that we have designed for collective tasks that include coverage, mapping, and cooperative manipulation. We develop and analyze models of the swarm at different levels of abstraction based on differential equations, Markov chains, and graphs, and we design robot controllers using feedback control theory and optimization techniques. We validate our control strategies in simulation and on experimental test beds with small mobile robots.

Posted November 30, 2022

Last modified February 28, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Jacquelien Scherpen, University of Groningen
IEEE Fellow, Automatica Best Paper Prize Awardee

Model Reduction for Nonlinear Control Systems Based on Differential Balancing and Data

We present the standard balancing theory for nonlinear systems, which is based on an analysis around equilibrium points. Its extension to the contraction framework offers computational advantages, and is presented as well. We provide definitions for controllability and observability functions and their differential versions, which can be used for simultaneous diagonalization procedures, providing a measure of the importance of the states, as shown by the relation to the Hankel operator. In addition, we propose a data-based model reduction method based on differential balancing for nonlinear systems whose input vector fields are constant, by utilizing the variational system. The difference between controllability and reachability for the variational system is exploited for computational reasons. For a fixed state trajectory, it is possible to compute the values of the differential Gramians by using impulse and initial state responses of the variational system. Therefore, differential balanced truncation can be carried out along state trajectories without solving nonlinear partial differential equations.

Posted February 14, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Frank Allgower, University of Stuttgart
IFAC Fellow

Data-driven Model Predictive Control: Concepts, Algorithms and Properties

While recent years have shown rapid progress of learning-based and data-driven methods to effectively utilize data for control tasks, providing rigorous theoretical guarantees for such methods is challenging and an active field of research. This talk will be about a recently developed framework for model predictive control (MPC) of unknown systems based only on input-output data which admits exactly such guarantees. The proposed approach relies on the Fundamental Lemma of Willems et al. which parametrizes trajectories of unknown linear systems using data. First, we cover MPC schemes for linear systems with a focus on theoretical guarantees for the closed loop, which can be derived even if the data are noisy. Building on these results, we then move towards the general, nonlinear case. Specifically, we present a data-driven MPC approach which updates the data used for prediction online at every time step and, thereby, stabilizes unknown nonlinear systems using only input-output data. In addition to introducing the framework and the theoretical results, we also report on successful applications of the proposed framework in simulation and real-world experiments.
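As background, the Fundamental Lemma of Willems et al. cited above states that, for a controllable LTI system driven by a persistently exciting input, every length-L trajectory lies in the column span of a block-Hankel matrix built from a single measured trajectory. A minimal sketch of the Hankel construction (function name and data are illustrative, not from the talk):

```python
import numpy as np

def block_hankel(w, L):
    """Depth-L block-Hankel matrix of a data sequence w (shape T x m);
    each column is one length-L window of the data, stacked as a vector."""
    T, m = w.shape
    cols = T - L + 1
    H = np.empty((L * m, cols))
    for i in range(cols):
        H[:, i] = w[i:i + L].reshape(-1)
    return H

# Per the Fundamental Lemma, any length-L trajectory of the data-generating
# system can be written as H @ g for some vector g (given persistency of
# excitation); here we only illustrate the construction itself.
w = np.arange(12.0).reshape(6, 2)   # T = 6 samples of a 2-dimensional signal
H = block_hankel(w, L=3)
print(H.shape)  # (6, 4)
```

Data-driven MPC schemes of the kind described in the talk use such a matrix, built from recorded input-output data, in place of an explicit model when predicting future trajectories.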

Posted February 13, 2023

Last modified April 9, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Ningshi Yao, George Mason University

Resolving Contentions Through Real-Time Control and Scheduling for Cyber Physical Human Systems

Shared resources, such as cloud computing and communication networks, are widely used in large-scale real-time systems to increase modularity and flexibility. When multiple systems need to access a shared resource at the same time and the demands exceed the total supply, a contention occurs. A scheduling strategy is needed to determine which systems can access the resource first to resolve contentions. However, such a scheduling mechanism inevitably introduces time-varying delays and may degrade system performance or even jeopardize the stability of the control systems. Considering the coupling between scheduling and control, this talk presents a novel sample-based method to co-design scheduling strategies and control laws for coupled control systems with shared resources, which aims to minimize the overall performance degradation caused by contentions. The co-design problem is formulated as a mixed-integer optimization problem with a very large search space, which makes the optimal solution difficult to compute. To address this challenge, we describe a contention resolving model predictive control (CRMPC) method to dynamically design optimal scheduling and control in real time. Under fundamental assumptions from scheduling theory, the solution computed by CRMPC can be proven globally optimal. CRMPC is a general theoretical framework that can be applied to many applications in cyber-physical-human systems. The effectiveness of CRMPC has been verified in real-world applications, such as networked control systems, traffic intersection management systems, and human multi-robot collaboration systems. The performance of CRMPC was compared with well-known scheduling methods and demonstrated significant improvements.

Posted December 12, 2022

Last modified April 11, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Maria Elena Valcher, University of Padova
Fellow of IEEE and of IFAC

On the Influence of Homophily on the Friedkin-Johnsen Model

Over the last few decades, the modelling and analysis of sociological phenomena have attracted the interest of researchers from various fields, such as sociology, economics, and mathematics. Opinion dynamics models aim to describe and predict the evolution of the opinions of a group of individuals as a result of their mutual influence/appraisal. One of the most celebrated opinion dynamics models is the Friedkin-Johnsen (FJ) model, which captures how individuals form their opinions by balancing exogenous and endogenous influences. On the one hand, they value the opinions of the other individuals, weighted by the appraisals they have of them; on the other hand, they tend to adhere to their original opinions, which represent a permanent bias, to an extent that depends on each agent's stubbornness. In the classical FJ model the weights that each agent gives to the opinions of the others are fixed. However, this is not consistent with other opinion dynamics models, where the weight matrix is time-varying and updates according to a homophily mechanism: individuals decide which individuals they want to be influenced by (and, conversely, which individuals they want to distance their opinions from) based on the correlation between their opinion vectors. In this talk we will explore some recent results regarding this extended FJ model and present some future directions and challenges related to opinion dynamics models.
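For reference, the classical FJ update described in words above is usually written x(k+1) = Λ W x(k) + (I − Λ) x(0), where W is the row-stochastic appraisal matrix and Λ = diag(λ_i) collects the susceptibilities (1 − λ_i being agent i's stubbornness). A minimal sketch with illustrative parameter values:

```python
import numpy as np

def fj_step(x, x0, W, lam):
    """One Friedkin-Johnsen update: x+ = Lam W x + (I - Lam) x0.
    W is row-stochastic (appraisals); lam[i] in [0, 1] is agent i's
    susceptibility, so 1 - lam[i] is its stubbornness."""
    return lam * (W @ x) + (1.0 - lam) * x0

# Two agents: agent 0 is fully stubborn (lam = 0), agent 1 fully open.
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
x0 = np.array([1.0, 0.0])
lam = np.array([0.0, 1.0])
x = x0.copy()
for _ in range(200):            # iterate to (numerical) steady state
    x = fj_step(x, x0, W, lam)
print(np.round(x, 3))  # both opinions settle at the stubborn agent's value, 1.0
```

With fixed weights W the model converges to a steady state; the homophily-based extension discussed in the talk instead lets W itself evolve with the opinions.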

Posted January 17, 2023

Last modified April 16, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Weiwei Hu, University of Georgia

Optimal Control for Suppression of Singularity in Chemotaxis

In this talk, we discuss the problem of optimal control for chemotaxis governed by the parabolic-elliptic Patlak-Keller-Segel (PKS) system via flow advection. The main idea is to utilize flow advection for enhancing diffusion so as to control the nonlinear behavior of the system. The objective is to determine an optimal strategy for adjusting the advection strength so that the local-in-time blow-up of the solution can be suppressed. A rigorous proof of the existence of an optimal solution and a derivation of first-order optimality conditions for computing such a solution are presented. Numerical experiments based on 2D cellular flows in a rectangular domain are conducted to demonstrate our ideas and designs.

Posted January 26, 2023

Last modified April 2, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Wim Michiels, KU Leuven

Strong Relative Degree of Time-Delay Systems with Non-Commensurate Delays

The presentation addresses the notion of relative degree for linear time-delay systems of retarded type, when the common assumption of commensurate delays is dropped. Algebraic conditions are provided that fully exploit the delay dependence structure. It is shown that the relative degree may be sensitive to delay perturbations, which is the basis of a novel notion of relative degree, called strong relative degree. This notion is characterized algebraically and computationally in the SISO and MIMO settings. Using the obtained characterizations and a benchmark problem, which illustrates that invariant zeros may be characterized as zeros of quasi-polynomials of retarded, neutral or advanced type, light is shed on existence conditions of a normal form. The novel concepts and theoretical results also play a role in the design and analysis of extended PD controllers, as illustrated. Finally, connections are established with the notion of strong stability and strong H2-norm for delay equations of neutral type and delayed descriptor systems. This work is in collaboration with Bin Zhou from Harbin Institute of Technology.

Posted February 13, 2023

Last modified April 25, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Matthew Hale, University of Florida
AFOSR Young Investigator, ONR Young Investigator, and NSF CAREER Program Awardee

Resilient Multi-Agent Coordination: From Theory to Practice

A multi-agent system is any collection of decision-makers that collaborates on a common task. A distinguishing feature is that communications among agents provide the feedback signals needed for autonomous decision-making. For example, a team of drones may exchange location data and images to jointly map an area. There is now a large literature on multi-agent systems, though practical implementations are often fragile or only done in controlled environments. A fundamental challenge is that agents’ communications in realistic environments can be impaired, e.g., by delays and intermittency, and thus agents must rely on impaired feedback. To transition theory to practice, such systems need novel coordination techniques that are provably resilient to such impairments and validated in practice under realistic conditions. In this talk, I will cover two recent developments in my group that have successfully transitioned novel theory to practice for multi-agent systems facing asynchronous communications. The first considers a class of geometrically complex coordination tasks – namely those given by constrained nonconvex programs – and provides provable guarantees of performance that are borne out in practice onboard teams of drones. The second considers a class of time-varying task specifications for agents that can change unpredictably. Theoretical results show that agents can complete this class of task under mild restrictions, and validation is provided by a team of lighter-than-air agents in a contested environment.

Posted August 25, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Cristopher Hermosilla, Universidad Técnica Federico Santa María

Hamilton-Jacobi-Bellman Approach for Optimal Control Problems of Sweeping Processes

This talk is concerned with a state-constrained optimal control problem governed by a Moreau sweeping process with a controlled drift. The focus of this work is on the Bellman approach for an infinite horizon problem. In particular, we focus on the regularity of the value function and on the Hamilton-Jacobi-Bellman equation it satisfies. We discuss a uniqueness result and we make a comparison with standard state-constrained optimal control problems to highlight a regularizing effect that the sweeping process induces on the value function. This is a joint work with Michele Palladino (University of L'Aquila, Italy) and Emilio Vilches (Universidad de O'Higgins, Chile).

Posted August 18, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Mario Sznaier, Northeastern University
IEEE Fellow, IEEE Control Systems Society Distinguished Member Awardee

Why Do We Need Control in Control Oriented Learning?

Despite recent advances in machine learning (ML), the goal of designing control systems capable of fully exploiting the potential of these methods remains elusive. Modern ML can leverage large amounts of data to learn powerful predictive models, but such models are not designed to operate in a closed-loop environment. Recent results on reinforcement learning offer a tantalizing view of the potential of a rapprochement between control and learning, but so far proofs of performance and safety are mostly restricted to limited cases. Thus, learning elements are often used as black boxes in the loop, with limited interpretability and less than completely understood properties. Further progress hinges on the development of a principled understanding of the limitations of control-oriented machine learning. This talk will present some initial results unveiling the fundamental limitations of some popular learning algorithms and architectures when used to control a dynamical system. For instance, we show that even though feedforward neural nets are universal approximators, they are unable to stabilize some simple systems. We also show that a recurrent neural net with differentiable activation functions that stabilizes a non-strongly stabilizable system must itself be open-loop unstable, and discuss the implications of this for training with noisy, finite data. Finally, we present a simple system where any controller based on unconstrained optimization of the parameters of a given structure fails to render the closed-loop system input-to-state stable. The talk finishes by arguing that when the goal is to learn stabilizing controllers, the loss function should reflect closed-loop performance, which can be accomplished using gap-metric-motivated loss functions, and by presenting initial steps toward that goal.

Posted August 18, 2023

Last modified September 11, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Cristina Pignotti, Università degli Studi dell'Aquila

Consensus Results for Hegselmann-Krause Type Models with Time Delay

We study Hegselmann-Krause (HK) opinion formation models in the presence of time delay effects. The influence coefficients among the agents are nonnegative, as usual, but they can also degenerate. This includes, e.g., the case of on-off influence, namely when the agents do not communicate over some time intervals. We give sufficient conditions ensuring that consensus is achieved for all initial configurations. Moreover, we analyze the continuity-type equation obtained as the mean-field limit of the particle model when the number of agents goes to infinity. Finally, we analyze a control problem for a delayed HK model with leadership and design a simple control strategy steering all agents to any fixed target opinion.
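As background, the classical (undelayed, bounded-confidence) HK update that these models generalize replaces each opinion by the average of all opinions within a confidence radius ε. A minimal sketch with illustrative values:

```python
import numpy as np

def hk_step(x, eps):
    """Classical Hegselmann-Krause update: each agent moves to the
    average opinion of all agents within confidence radius eps."""
    x = np.asarray(x, dtype=float)
    new = np.empty_like(x)
    for i, xi in enumerate(x):
        neighbors = x[np.abs(x - xi) <= eps]  # includes the agent itself
        new[i] = neighbors.mean()
    return new

# Opinions within eps of each other merge; distant groups stay separate.
x = np.array([0.0, 0.1, 0.2, 0.9, 1.0])
for _ in range(20):
    x = hk_step(x, eps=0.25)
print(np.round(x, 3))  # two clusters, at 0.1 and 0.95
```

In the delayed models of the talk, agent i averages past opinions of its neighbors, and the influence coefficients may vanish over some intervals, which is what makes the consensus analysis delicate.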

Posted September 12, 2023

Last modified October 11, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Melvin Leok, University of California, San Diego

Connections Between Geometric Mechanics, Information Geometry, Accelerated Optimization and Machine Learning

Geometric mechanics describes Lagrangian and Hamiltonian mechanics geometrically, and information geometry formulates statistical estimation, inference, and machine learning in terms of geometry. A divergence function is an asymmetric distance between two probability densities that induces differential geometric structures and yields efficient machine learning algorithms that minimize the duality gap. The connection between information geometry and geometric mechanics will yield a unified treatment of machine learning and structure-preserving discretizations. In particular, the divergence function of information geometry can be viewed as a discrete Lagrangian, which is a generating function of a symplectic map of the kind that arises in discrete variational mechanics. This identification allows the methods of backward error analysis to be applied, and the symplectic map generated by a divergence function can be associated with the exact time-h flow map of a Hamiltonian system on the space of probability distributions. We will also discuss how time-adaptive Hamiltonian variational integrators can be used to discretize the Bregman Hamiltonian, whose flow generalizes the differential equation that describes the dynamics of the Nesterov accelerated gradient descent method.
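For concreteness, the Nesterov accelerated gradient descent method mentioned at the end, in its standard discrete form (not the talk's variational-integrator discretization), can be sketched as follows; the test function and parameters are illustrative:

```python
import numpy as np

def nesterov(grad, x0, step, iters):
    """Nesterov accelerated gradient descent with the standard
    momentum sequence t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_next = y - step * grad(y)           # gradient step at lookahead point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Minimize f(x) = ||x||^2 / 2 (gradient is x); the iterates approach the origin.
x_star = nesterov(lambda x: x, x0=np.array([5.0, -3.0]), step=0.5, iters=100)
print(np.linalg.norm(x_star) < 1e-6)  # True
```

The continuous-time limit of this recursion is the ODE whose generalization, the Bregman Hamiltonian flow, the talk discretizes with time-adaptive variational integrators.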

Posted August 22, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Eduardo Cerpa, Pontificia Universidad Católica de Chile
SIAM Activity Group on Control and Systems Theory Prize Recipient

Control and System Theory Methods in Neurostimulation

Electrical stimulation therapies are used to treat the symptoms of a variety of nervous system disorders. Recently, the use of high frequency signals has received increased attention due to its varied effects on tissues and cells. In this talk, we will see how some methods from Control and System Theory can be useful to address relevant questions in this framework when the FitzHugh-Nagumo model of a neuron is considered. Here, the stimulation is through the source term of an ODE and the level of neuron activation is associated with the existence of action potentials, which are solutions with a particular profile. A first question concerns the effectiveness of a recent technique called interferential currents, which combines two signals of similar kilohertz frequencies intended to activate deeply positioned cells. The second question is about how to avoid the onset of undesirable action potentials that originate when signals producing conduction block are turned on. We will show theoretical and computational results based on methods such as averaging, Lyapunov analysis, quasi-static steering, and others.

Posted August 22, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Philip E. Paré, Purdue University

Modeling, Estimation, and Analysis of Epidemics over Networks

We present and analyze mathematical models for network-dependent spread. We use the analysis to validate an SIS (susceptible-infected-susceptible) model employing John Snow's classical work on cholera epidemics in London in the 1850s. Given the demonstrated validity of the model, we discuss control strategies for mitigating spread, and formulate a tractable antidote administration problem that significantly reduces spread. Then we formulate a parameter estimation problem for an SIR (susceptible-infected-recovered) networked model, where costs are incurred by measuring different nodes' states and the goal is to minimize the total cost spent on collecting measurements or to optimize the parameter estimates while remaining within a measurement budget. We show that these problems are NP-hard to solve in general and propose approximation algorithms with performance guarantees. We conclude by discussing an ongoing project where we are developing online parameter estimation techniques for noisy data and time-varying epidemics.
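As background, a standard networked SIS formulation (the parameter values below are illustrative, not from the talk) evolves each node's infection level x_i by dx_i/dt = −δ x_i + (1 − x_i) β Σ_j A_ij x_j. A forward-Euler sketch:

```python
import numpy as np

def sis_step(x, A, beta, delta, dt):
    """Forward-Euler step of the networked SIS model:
    dx_i/dt = -delta * x_i + (1 - x_i) * beta * sum_j A_ij x_j."""
    dx = -delta * x + (1.0 - x) * beta * (A @ x)
    return np.clip(x + dt * dx, 0.0, 1.0)   # keep infection levels in [0, 1]

# Three fully connected nodes; recovery dominates infection here
# (beta * lambda_max(A) < delta), so the epidemic dies out.
A = np.ones((3, 3)) - np.eye(3)
beta, delta = 0.05, 0.5
x = np.array([0.3, 0.1, 0.0])
for _ in range(2000):
    x = sis_step(x, A, beta, delta, dt=0.01)
print(np.round(x, 4))
```

Whether the infection dies out or persists depends on the ratio of the spectral radius of the weighted contact network to the recovery rate, which is the kind of threshold the control strategies in the talk exploit.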

Posted January 18, 2023

Last modified October 30, 2023

Control and Optimization Seminar Questions or comments?

10:30 am 233 Lockett and Zoom (Click “Questions or Comments?” to request a Zoom link)
Maruthi Akella, University of Texas
Fellow of AIAA, IEEE, and AAS

Sub-Modularity Measures for Learning and Robust Perception in Aerospace Autonomy

Onboard learning and robust perception can generally be viewed as overarching system-level properties that characterize autonomy. The complex interplay between autonomy and onboard decision support systems introduces new vulnerabilities that are extremely hard to predict with most existing guidance and control tools. In this seminar, we review some recent advances in learning-oriented and information-aware path planning, and sub-modularity metrics for non-myopic sensor scheduling for “plug-and-play” systems. The concept of “learning-oriented” path planning is realized through certain new classes of exploration-inducing distance metrics. These technical foundations will be highlighted through aerospace applications with active learning in dynamic and uncertain environments.

Posted September 2, 2023

Last modified November 15, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Sean Meyn, University of Florida
Robert C. Pittman Eminent Scholar Chair, IEEE Fellow, IEEE CSS Distinguished Lecturer

Stochastic Approximation and Extremum Seeking Control

Stochastic approximation was introduced in the 1950s to solve root finding problems, of which optimization is a canonical application. It is argued in recent work that extremum seeking control (ESC), a particular approach to gradient-free optimization with an even longer history, can be cast as quasi-stochastic approximation (QSA). In this lecture, we will go through the basics of these (until now) disparate fields. Application of QSA theory to ESC leads to several significant conclusions, including that ESC is not globally stable, as examples show. Careful application of QSA theory leads to new algorithms that are stable without any loss of performance. Also, QSA theory immediately provides asymptotic and transient bounds, providing guidelines for algorithm design. In addition to surveying this general theory, the talk provides a tutorial on design principles through numerical studies.

Posted September 29, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Hélène Frankowska, Sorbonne University

Differential Inclusions on Wasserstein Spaces

Optimal control in Wasserstein spaces addresses control of systems with large numbers of agents. It is well known that for optimal control of ODEs, the theory of differential inclusions provides useful tools to investigate the existence of optimal controls, necessary optimality conditions, and Hamilton-Jacobi-Bellman equations. Recently, many models arising in social sciences have used the framework of Wasserstein spaces, i.e., metric spaces of Borel probability measures endowed with the Wasserstein metric. This talk is devoted to a recent extension, given in [1], of the theory of differential inclusions to the setting of general Wasserstein spaces. In the second part of the talk, necessary and sufficient conditions for the existence of solutions to state-constrained continuity inclusions in Wasserstein spaces, whose right-hand sides may be discontinuous in time, are provided; see [2]. These latter results are based on a fine investigation of the infinitesimal behavior of the underlying reachable sets, which heuristically amounts to showing that, up to a negligible set, every admissible velocity can be realized as the metric derivative of a solution of the continuity inclusion, and vice versa. Building on these results, necessary and sufficient geometric conditions for the viability and invariance of stationary and time-dependent constraints, which involve a suitable notion of contingent cones in Wasserstein spaces, are established. Viability and invariance theorems in a more restrictive framework were already applied in [5], [6] to investigate stability of controlled continuity equations and uniqueness of solutions to HJB equations. The new tools provided allow us to obtain similar results in general Wasserstein spaces. References: [1] BONNET B. and FRANKOWSKA H., Caratheodory Theory and a Priori Estimates for Continuity Inclusions in the Space of Probability Measures, preprint https://arxiv.org/pdf/2302.00963.pdf, 2023. [2] BONNET B. and FRANKOWSKA H., On the Viability and Invariance of Proper Sets under Continuity Inclusions in Wasserstein Spaces, SIAM Journal on Mathematical Analysis, to appear. [3] BONNET B. and FRANKOWSKA H., Differential Inclusions in Wasserstein Spaces: The Cauchy-Lipschitz Framework, Journal of Differential Equations, 271: 594-637, 2021. [4] BONNET B. and FRANKOWSKA H., Mean-Field Optimal Control of Continuity Equations and Differential Inclusions, Proceedings of the 59th IEEE Conference on Decision and Control, Republic of Korea, December 8-11, 2020: 470-475, 2020. [5] BONNET B. and FRANKOWSKA H., Viability and Exponentially Stable Trajectories for Differential Inclusions in Wasserstein Spaces, Proceedings of the 61st IEEE Conference on Decision and Control, Mexico, December 6-9, 2022: 5086-5091, 2022. [6] BADREDDINE Z. and FRANKOWSKA H., Solutions to Hamilton-Jacobi Equation on a Wasserstein Space, Calculus of Variations and PDEs, 81: 9, 2022.

Posted September 8, 2023

Last modified November 14, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Meeko Oishi, University of New Mexico
NSF BRITE Fellow

Human-Centered Probabilistic Planning and Control

Although human interaction with autonomous systems is becoming ubiquitous, few tools exist for planning and control of autonomous systems that account for human uncertainty and decision making. We seek methods for probabilistic verification and control that can help ensure compatibility of autonomous systems with human decision making and human uncertainty. This requires the development of theory and computational tools that can accommodate arbitrary, non-Gaussian uncertainty for both probabilistic verification and control, potentially without high-confidence models. This talk will focus on our work in probabilistic verification of ReLU neural nets, data-driven stochastic optimal control, and stochastic reachability. Our approaches to probabilistic verification are based on Fourier transforms and chance-constrained optimization, and our approaches to data-driven stochastic planning and control are based on conditional distribution embeddings. Both of these approaches enable computation without gridding, sampling, or recursion. We also present recent work on data-driven tools for high-fidelity modeling and characterization of human-in-the-loop trajectories that accommodate dynamic processes with probabilistic human inputs.

Posted January 11, 2024

Last modified January 17, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (Click “Questions or Comments?” to request a Zoom link)
Boris Mordukhovich, Wayne State University
AMS Fellow, SIAM Fellow

Optimal Control of Sweeping Processes with Applications

This talk is devoted to a novel class of optimal control problems governed by sweeping (or Moreau) processes that are described by discontinuous dissipative differential inclusions. Although such dynamical processes, strongly motivated by applications, first appeared in the 1970s, optimal control problems for them have only been formulated quite recently and were found to be complicated from the viewpoint of developing control theory. Their study and applications require advanced tools of variational analysis and generalized differentiation, which will be presented in this talk. Combining this machinery with the method of discrete approximations leads us to deriving new necessary optimality conditions and their applications to practical models in elastoplasticity, traffic equilibria, and robotics. This talk is based on joint work with Giovanni Colombo (University of Padova), Dao Nguyen (San Diego State University), and Trang Nguyen (Wayne State University).

Posted February 2, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (Click “Questions or Comments?” to request a Zoom link)
Ali Kara, University of Michigan

Stochastic Control with Partial Information: Optimality, Stability, Approximations and Learning

Partially observed stochastic control is an appropriate model for many applications involving optimal decision making and control. In this talk, we will first present a general introduction and then study optimality, approximation, and learning-theoretic results. For such problems, the existence of optimal policies has in general been established by reducing the original partially observed stochastic control problem to a fully observed one with probability-measure-valued states. However, computing a near-optimal policy for this fully observed model is challenging. We present an alternative reduction tailored to an approximation analysis via filter stability and arrive at an approximate finite model. Toward this end, we will present associated regularity, Feller continuity, and controlled filter stability conditions: filter stability refers to the correction of an incorrectly initialized filter for a partially observed dynamical system as measurements accumulate. We present explicit conditions for filter stability, which are then utilized to arrive at approximately optimal solutions. Finally, we establish the convergence of a learning algorithm for control policies that use a finite history of past observations and control actions (by viewing the finite window as a 'state') and establish near optimality of this approach. As a corollary, this analysis establishes near optimality of classical Q-learning for continuous-state-space stochastic control problems (by lifting them to partially observed models with approximating quantizers viewed as measurement kernels) under weak continuity conditions. Further implications and some open problems will also be discussed.
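The finite-window idea can be sketched on a deliberately tiny, hypothetical POMDP (two hidden states, noisy binary observations, made up for this illustration): Q-learning treats the last two observations as a surrogate state. This is only a schematic of the windowing construction, not the talk's algorithm or its guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny hypothetical POMDP: hidden state in {0,1}; the action steers the
# state (success prob 0.9); observations flip the state with prob 0.2;
# reward is 1 when the new hidden state is 1.
def step(s, a):
    s_next = a if rng.random() < 0.9 else 1 - a
    obs = s_next if rng.random() < 0.8 else 1 - s_next
    return s_next, obs, float(s_next == 1)

# Q-learning with the last two observations viewed as the 'state'
Q = np.zeros((4, 2))          # 4 windows (o_{t-1}, o_t) x 2 actions
alpha, gamma, eps = 0.05, 0.9, 0.3
s, window = 0, (0, 0)
for _ in range(50000):
    w_idx = 2 * window[0] + window[1]
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[w_idx]))
    s, obs, r = step(s, a)
    w_next = 2 * window[1] + obs
    Q[w_idx, a] += alpha * (r + gamma * Q[w_next].max() - Q[w_idx, a])
    window = (window[1], obs)

print(np.argmax(Q, axis=1))  # learned finite-window policy
```

In this toy model the action that steers toward state 1 should dominate for every window, so the learned greedy policy is constant; in general the finite-window policy depends on the observed history.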

Posted December 28, 2023

Last modified February 20, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (Click “Questions or Comments?” to request a Zoom link)
Huyên Pham
Editor-in-Chief for SIAM Journal on Control and Optimization, 2024-

A Schrödinger Bridge Approach to Generative Modeling for Time Series

We propose a novel generative model for time series based on the Schrödinger bridge (SB) approach. It consists of the entropic interpolation, via optimal transport, between a reference probability measure on path space and a target measure consistent with the joint data distribution of the time series. The solution is characterized by a stochastic differential equation on a finite horizon with a path-dependent drift function, hence respecting the temporal dynamics of the time series distribution. We estimate the drift function from data samples by nonparametric methods, e.g., kernel regression, and simulating the SB diffusion yields new synthetic data samples of the time series. The performance of our generative model is evaluated through a series of numerical experiments. First, we test on autoregressive models, a GARCH model, and fractional Brownian motion, and measure the accuracy of our algorithm with marginal and temporal-dependency metrics and predictive scores. Next, we use our SB-generated synthetic samples for an application to deep hedging on real data sets.
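Two ingredients of the abstract, nonparametric drift estimation and simulation of the estimated diffusion, can be sketched in a stripped-down form. The sketch below uses a Markovian (state-dependent, not path-dependent) Nadaraya-Watson drift estimate on synthetic training paths; the data-generating model, bandwidth, and step sizes are arbitrary choices for illustration, not the SB entropic interpolation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: N paths of the diffusion dX = -X dt + 0.2 dW
N, T, dt = 500, 50, 0.02
X = np.zeros((N, T + 1))
for k in range(T):
    X[:, k + 1] = X[:, k] - X[:, k] * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(N)

def drift_hat(k, x, h=0.2):
    """Nadaraya-Watson kernel-regression estimate of the drift at time
    step k and state x, from the sample increments (a Markovian
    simplification of the path-dependent SB drift)."""
    w = np.exp(-0.5 * ((X[:, k] - x) / h) ** 2)
    incr = (X[:, k + 1] - X[:, k]) / dt
    return np.sum(w * incr) / (np.sum(w) + 1e-12)

# One synthetic sample path: Euler simulation of the estimated SDE
y = np.zeros(T + 1)
y[0] = 1.0
for k in range(T):
    y[k + 1] = y[k] + drift_hat(k, y[k]) * dt + 0.2 * np.sqrt(dt) * rng.standard_normal()
print(y[-1])
```

The simulated path is a new sample whose dynamics mimic the training ensemble; the SB construction additionally conditions the drift on the path history so that joint (not just marginal) distributions are matched.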

Posted January 22, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Dante Kalise, Imperial College

Feedback Control Synthesis for Interacting Particle Systems across Scales

This talk focuses on the computational synthesis of optimal feedback controllers for interacting particle systems operating at different scales. In the first part, we discuss the construction of control laws for large-scale microscopic dynamics by supervised learning methods, tackling the curse of dimensionality inherent in such systems. Moving forward, we integrate the microscopic feedback law into a Boltzmann-type equation, bridging controls at microscopic and mesoscopic scales, allowing for near-optimal control of high-dimensional densities. Finally, in the framework of mean field optimal control, we discuss the stabilization of nonlinear Fokker-Planck equations towards unstable steady states via model predictive control.
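The first step, learning a feedback law by supervised regression on sampled optimal actions, can be illustrated in the simplest possible setting. The sketch below uses a linear-quadratic problem (double integrator, identity weights, all hypothetical choices) where the optimal feedback is known from the Riccati equation, so a least-squares regressor recovers it exactly; in the talk's high-dimensional nonlinear setting, the regressor would be a neural network and the training targets would come from open-loop optimal control solves.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical LQR problem: double integrator, identity weights
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal gain, u = -K x

# Supervised learning: sample states, label them with optimal actions,
# fit a (here linear) regressor state -> action
rng = np.random.default_rng(0)
Xs = rng.standard_normal((5000, 2))      # sampled training states
Us = -(K @ Xs.T).T                       # supervised targets u = -K x
K_hat, *_ = np.linalg.lstsq(Xs, -Us, rcond=None)
print(K_hat.T)  # recovers the gain K
```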

Posted February 12, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Note the special earlier seminar time for this week only. Zoom (click here to join)
Antoine Girard, Laboratoire des Signaux et Systèmes
CNRS Bronze Medalist, IEEE Fellow, and George S. Axelby Outstanding Paper Awardee

Switched Systems with Omega-Regular Switching Sequences: Application to Switched Observer Design

In this talk, I will present recent results on discrete-time switched linear systems. We consider systems with constrained switching signals where the constraint is given by an omega-regular language. Omega-regular languages allow us to specify fairness properties (e.g., all modes have to be activated an infinite number of times) that cannot be captured by usual switching constraints given by dwell-times or graph constraints. By combining automata theoretic techniques and Lyapunov theory, we provide necessary and sufficient conditions for the stability of such switched systems. In the second part of the talk, I will present an application of our framework to observer design of switched systems that are unobservable for arbitrary switching. We establish a systematic and "almost universal" procedure to design observers for discrete-time switched linear systems. This is joint work with Georges Aazan, Luca Greco and Paolo Mason.
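For the unconstrained (arbitrary-switching) case, a standard sufficient stability test can be sketched numerically: bound the joint spectral radius by the maximum normalized norm of all mode products of a fixed length. The two modes below are hypothetical; the omega-regular constrained setting of the talk refines this by restricting which switching words are admissible.

```python
import itertools
import numpy as np

# Two Schur-stable modes of a discrete-time switched system x+ = A_s x
A = [np.array([[0.6, 0.3], [0.0, 0.5]]),
     np.array([[0.5, 0.0], [0.4, 0.6]])]

def product_norm_bound(modes, length):
    """max ||A_{s_L} ... A_{s_1}||^(1/L) over all switching words of the
    given length L: an upper bound on the joint spectral radius.
    A value < 1 certifies stability under arbitrary switching."""
    best = 0.0
    for word in itertools.product(range(len(modes)), repeat=length):
        P = np.eye(2)
        for s in word:
            P = modes[s] @ P
        best = max(best, np.linalg.norm(P, 2) ** (1.0 / length))
    return best

for L in (1, 2, 4, 6):
    print(L, product_norm_bound(A, L))  # bounds tighten as L grows
```

By submultiplicativity the bounds are nonincreasing in the word length; here the length-1 bound is already below 1, which is a common quadratic Lyapunov certificate with V(x) = ||x||^2.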

Posted January 22, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Boris Kramer, University of California San Diego

Scalable Computations for Nonlinear Balanced Truncation Model Reduction

Nonlinear balanced truncation is a model order reduction technique that reduces the dimension of nonlinear systems on nonlinear manifolds and preserves either open- or closed-loop observability and controllability aspects of the nonlinear system. Two computational challenges have so far prevented its deployment on large-scale systems: (a) the solution of the Hamilton-Jacobi-(Bellman) equations that are needed to characterize controllability and observability aspects, and (b) efficient model reduction and reduced-order model (ROM) simulation on the resulting nonlinear balanced manifolds. We present a novel, unifying, and scalable approach to balanced truncation for large-scale control-affine nonlinear systems that uses a Taylor-series expansion to solve a class of parametrized Hamilton-Jacobi-Bellman equations that are at the core of balancing. The specific tensor structure of the Taylor-series coefficients (which are themselves tensors) allows for scalability up to thousands of states. Moreover, we will present a nonlinear balance-and-reduce approach that finds a reduced nonlinear state transformation that balances the system properties. The talk will illustrate the strength and scalability of the algorithm on several semi-discretized nonlinear partial differential equations, including a nonlinear heat equation, vibrating beams, Burgers' equation, and the Kuramoto-Sivashinsky equation.
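In the linear special case, the Gramians that balancing is built on solve Lyapunov (rather than Hamilton-Jacobi-Bellman) equations, and the quantities to truncate are the Hankel singular values. The sketch below computes them for a small hypothetical stable system via the square-root method; the talk's contribution is the nonlinear, large-scale generalization of this step.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# A small hypothetical stable SISO system (the linear analogue of balancing)
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability and observability Gramians:
#   A Wc + Wc A' = -B B'   and   A' Wo + Wo A = -C' C
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values via the square-root (Cholesky + SVD) method
Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
hsv = svd(Lo.T @ Lc, compute_uv=False)
print(hsv)  # decreasing; states with small values can be truncated
```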

Posted January 27, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Sergey Dashkovskiy, Julius-Maximilians-Universität Würzburg

Stability Properties of Dynamical Systems Subjected to Impulsive Actions

We consider several approaches to studying stability and instability properties of infinite-dimensional impulsive systems. The approaches are of Lyapunov type and provide conditions under which an impulsive system is stable. In particular, we will cover the case when the discrete and continuous dynamics are not simultaneously stable. We will also handle the case when both the flow and the jumps are stable but the overall system is not. We will illustrate these approaches by means of several examples.
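The first situation, where flow and jumps are not simultaneously stable, already shows up in a scalar example (a standard textbook illustration, not taken from the talk): an unstable flow can be stabilized by sufficiently contractive, sufficiently frequent impulses, with the balance governed by a dwell-time inequality.

```python
import numpy as np

# Scalar impulsive system: unstable flow dx/dt = a*x between impulses,
# stabilizing jump x -> c*x at each impulse time. With impulse period
# tau, the per-period gain is |c|*exp(a*tau), so the system is stable
# iff a*tau + log|c| < 0 even though the flow alone diverges.
a, c, tau = 1.0, 0.2, 1.0
gain = abs(c) * np.exp(a * tau)
print(gain)  # < 1 here, so trajectories decay

x, xs = 1.0, [1.0]
for _ in range(20):       # 20 impulse periods
    x *= np.exp(a * tau)  # continuous (unstable) flow over one period
    x *= c                # stabilizing jump
    xs.append(x)
print(xs[-1])
```

Conversely, taking |c| > exp(-a*tau) makes the gain exceed 1 and the same system diverges, which is the kind of interplay the Lyapunov-type conditions of the talk quantify in infinite dimensions.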

Posted January 6, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Madalena Chaves, Centre Inria d'Université Côte d'Azur

Coupling, Synchronization Dynamics, and Emergent Behavior in a Network of Biological Oscillators

Biological oscillators often involve a complex network of interactions, as in the case of circadian rhythms or the cell cycle. Mathematical modeling, and especially model reduction, helps to understand the main mechanisms behind oscillatory behavior. In this context, we first study a two-gene oscillator using piecewise linear approximations to improve the performance and robustness of the oscillatory dynamics. Next, motivated by the synchronization of biological rhythms in a group of cells in an organ such as the liver, we study a network of identical oscillators under diffusive coupling, interconnected according to different topologies. The piecewise linear formalism enables us to characterize the emergent dynamics of the network and show that a number of new steady states are generated in the network of oscillators. Finally, given two distinct oscillators mimicking the circadian clock and the cell cycle, we analyze their interconnection to study the capacity for mutual period regulation and control between the two reduced oscillators. We are interested in characterizing the coupling parameter range for which the two systems play the roles of controller and follower.
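Synchronization of identical oscillators under diffusive coupling can be demonstrated on a generic example (a van der Pol pair, not the talk's piecewise linear gene model): each oscillator receives a correction proportional to the difference from its neighbor, and for sufficiently strong coupling the two trajectories merge.

```python
import numpy as np

def vdp(state, mu=1.0):
    """Van der Pol vector field, a stand-in limit-cycle oscillator."""
    x, v = state
    return np.array([v, mu * (1 - x**2) * v - x])

# Two identical oscillators with diffusive coupling of strength k
k, dt = 5.0, 0.001
s1 = np.array([2.0, 0.0])
s2 = np.array([-1.0, 1.0])
for _ in range(20000):  # 20 time units of Euler integration
    d1 = vdp(s1) + k * (s2 - s1)
    d2 = vdp(s2) + k * (s1 - s2)
    s1, s2 = s1 + dt * d1, s2 + dt * d2
print(np.linalg.norm(s1 - s2))  # near 0: the oscillators synchronize
```

With weak or zero coupling the two copies oscillate with an arbitrary phase offset instead; the network analysis in the talk characterizes which additional steady states appear between these regimes.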

Posted January 17, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Tobias Breiten, Technical University of Berlin

On the Approximability of Koopman-Based Operator Lyapunov Equations

Computing the Lyapunov function of a system plays a crucial role in optimal feedback control, for example when policy iteration is used. This talk will focus on the Lyapunov function of a nonlinear autonomous finite-dimensional dynamical system, which will be rewritten as an infinite-dimensional linear system using the Koopman operator. Since this infinite-dimensional system has the structure of a weak-* continuous semigroup in a specially weighted Lp-space, one can establish a connection between the solution of an operator Lyapunov equation and the desired Lyapunov function. It will be shown that the solution to this operator equation exhibits rapid eigenvalue decay, which justifies finite-rank approximations with numerical methods. The usefulness for numerical computations will also be demonstrated with two short examples. This is joint work with Bernhard Höveler (TU Berlin).
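A finite-dimensional caricature of the lifting idea (illustrative only; it uses a discrete-time map and an EDMD-style data fit, not the talk's weighted Lp-space construction): approximate the Koopman operator on a monomial dictionary, then solve a Lyapunov equation in the lifted coordinates to obtain a candidate Lyapunov function for the original nonlinear system.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def f(x):
    """Hypothetical nonlinear map, locally stable at the origin."""
    return 0.8 * x - 0.1 * x**3

def psi(x):
    """Monomial observable dictionary (the lifting)."""
    return np.array([x, x**2, x**3])

# EDMD: least-squares fit of a linear matrix K with psi(f(x)) ~ K psi(x)
xs = np.linspace(-1.0, 1.0, 101)
PX = np.stack([psi(x) for x in xs], axis=1)   # 3 x 101
PY = np.stack([psi(f(x)) for x in xs], axis=1)
K = PY @ np.linalg.pinv(PX)

# Lifted discrete Lyapunov equation K' P K - P = -Q, then pull back:
# V(x) = psi(x)' P psi(x) is a candidate Lyapunov function for f
P = solve_discrete_lyapunov(K.T, np.eye(3))
def V(x):
    return psi(x) @ P @ psi(x)

x0 = 0.7
print(V(x0), V(f(x0)))  # V decreases along the nonlinear trajectory
```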

Posted January 16, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Jorge Poveda, University of California, San Diego
Recipient of the Donald P. Eckman Award, an NSF CAREER Award, and an AFOSR Young Investigator Program Award

Multi-Time Scale Hybrid Dynamical Systems for Model-Free Control and Optimization

Hybrid dynamical systems, which combine continuous-time and discrete-time dynamics, are prevalent in various engineering applications such as robotics, manufacturing systems, power grids, and transportation networks. Effectively analyzing and controlling these systems is crucial for developing autonomous and efficient engineering systems capable of real-time adaptation and self-optimization. This talk will delve into recent advancements in controlling and optimizing hybrid dynamical systems using multi-time scale techniques. These methods facilitate the systematic incorporation and analysis of both "exploration and exploitation" behaviors within complex control systems through singular perturbation and averaging theory, resulting in a range of provably stable and robust algorithms suitable for model-free control and optimization. Practical engineering system examples will be used to illustrate these theoretical tools.
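The "exploration and exploitation on separate time scales" idea is exactly what classical extremum seeking does, and a minimal sketch of it fits in a few lines (the cost, gains, and dither frequency below are arbitrary illustrative choices, not an algorithm from the talk): a fast sinusoidal dither probes an unknown cost, demodulation extracts a gradient estimate, and a slow integrator exploits it, with averaging theory justifying the separation.

```python
import numpy as np

def J(u):
    """Unknown cost to be minimized online; the controller only
    measures J(u), never its gradient (model-free)."""
    return (u - 2.0) ** 2 + 1.0

# Averaging-based extremum seeking: dither a*sin(w t) probes J, and
# demodulating the measurement by sin(w t) recovers (on average) a
# scaled gradient that drives the slow update of u.
dt, w, a, gain = 0.001, 50.0, 0.2, 5.0
u = -1.0
for k in range(200000):  # 200 time units
    t = k * dt
    y = J(u + a * np.sin(w * t))
    u -= gain * dt * y * np.sin(w * t)  # gradient estimate via demodulation
print(u)  # settles near the minimizer u* = 2, up to dither ripple
```

On the slow time scale the averaged dynamics here are u' = -gain*(a/2)*J'(u), a gradient flow, which is the two-time-scale structure the talk generalizes to hybrid systems.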

Posted April 29, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Giovanni Fusco, Università degli Studi di Padova

A Lie-Bracket-Based Notion of Stabilizing Feedback in Optimal Control

With reference to an optimal control problem where the state has to asymptotically approach a closed target while paying a non-negative integral cost, we propose a generalization of the classical dissipative relation that defines a control Lyapunov function by a weaker differential inequality. The latter involves both the cost and the iterated Lie brackets of the vector fields in the dynamics up to a certain degree $k\ge 1$, and we call any of its (suitably defined) solutions a degree-k minimum restraint function. We prove that the existence of a degree-k minimum restraint function allows us to build a Lie-bracket-based feedback which sample stabilizes the system to the target while regulating (i.e., uniformly bounding) the cost.
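The iterated Lie brackets entering the degree-k inequality can be computed symbolically; as a standard standalone example (not from the talk), the bracket of the two control vector fields of the nonholonomic integrator produces the new direction that no single field provides, which is the degree-2 mechanism the feedback construction exploits.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])

def lie_bracket(f, g, X):
    """Lie bracket [f, g] = Dg f - Df g of vector fields on R^n."""
    return g.jacobian(X) * f - f.jacobian(X) * g

# Nonholonomic integrator: f = (1, 0, -x2), g = (0, 1, x1)
f = sp.Matrix([1, 0, -x2])
g = sp.Matrix([0, 1, x1])
print(lie_bracket(f, g, X))  # (0, 0, 2): a direction spanned only by the bracket
```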