Calendar

Friday, September 24, 2021

Posted September 21, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)

Michael Malisoff, LSU Roy P. Daniels Professor
Event-Triggered Control Using a Positive Systems Approach

Control systems are a class of dynamical systems that contain forcing terms. When control systems are used in engineering applications, the forcing terms can represent forces that can be applied to the systems. The feedback control problem then consists of finding formulas for the forcing terms, which are functions that can depend on the state of the systems, and which ensure a prescribed qualitative behavior of the dynamical systems, such as global asymptotic convergence towards an equilibrium point. Such forcing terms are called feedback controls. Traditional feedback control methods call for continuously changing the feedback control values, or changing their values at a sequence of times that are independent of the state of the control systems. This can lead to unnecessarily frequent changes in control values, which can be undesirable in engineering applications. This motivated the development of event-triggered control, whose objective is to find formulas for feedback controls whose values are only changed when it is essential to change them in order to achieve a prescribed system behavior. This talk summarizes the speaker's recent research on event-triggered control theory and applications in marine robotics, which is joint work with Corina Barbalata, Zhong-Ping Jiang, and Frederic Mazenc. The talk will be understandable to those familiar with the basic theory of ordinary differential equations. No prerequisite background in systems and control will be needed to understand and appreciate this talk.
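
To make the triggering idea concrete, here is a minimal Python sketch (an illustration only, not the speaker's marine-robotics designs): a scalar system x' = u is driven by the held feedback u = -k x(t_k), and the control value is refreshed only when a relative-error event condition fires. The gain k, the threshold sigma, and the simulation parameters are arbitrary choices for the example.

    # Event-triggered control of the scalar system x' = u with held feedback
    # u = -k * x(t_k); the sample x(t_k) is refreshed only when the event
    # condition |x(t) - x(t_k)| > sigma * |x(t)| fires. Parameters are
    # arbitrary illustration choices.
    k, sigma = 1.0, 0.5          # feedback gain, relative trigger threshold
    dt, T = 1e-3, 10.0           # integration step and horizon
    x, x_k = 1.0, 1.0            # current state and last sampled state
    updates = 0

    for _ in range(int(T / dt)):
        if abs(x - x_k) > sigma * abs(x):   # event: re-sample the state
            x_k = x
            updates += 1
        u = -k * x_k                        # control held constant between events
        x += dt * u                         # forward-Euler step of x' = u

    print(f"final state {x:.2e} after {updates} control updates")

In this sketch the state still converges to the origin, but the control value is updated only a few dozen times over the horizon instead of at every integration step.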

Friday, October 8, 2021

Posted September 28, 2021
Last modified October 26, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)

Magnus Egerstedt, University of California, Irvine; Stacey Nicholas Dean of Engineering, IEEE Fellow, IFAC Fellow
Constraint-Based Control Design for Long Duration Autonomy

When robots are to be deployed over long time scales, optimality should take a backseat to “survivability”, i.e., it is more important that the robots do not break or completely deplete their energy sources than that they perform certain tasks as effectively as possible. For example, in the context of multi-agent robotics, we have a fairly good understanding of how to design coordinated control strategies for making teams of mobile robots achieve geometric objectives, such as assembling shapes or covering areas. But what happens when these geometric objectives no longer matter all that much? In this talk, we consider this question of long duration autonomy for teams of robots that are deployed in an environment over a sustained period of time and that can be recruited to perform a number of different tasks in a distributed, safe, and provably correct manner. This development will involve the composition of multiple barrier certificates for encoding tasks and safety constraints through the development of non-smooth barrier functions, as well as a detour into ecology as a way of understanding how persistent environmental monitoring can be achieved by studying animals with low-energy lifestyles, such as the three-toed sloth. Biography of Magnus Egerstedt.

Friday, October 15, 2021

Posted October 5, 2021
Last modified October 25, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)

Alberto Bressan, Penn State Eberly Family Chair Professor
Optimal Control of Propagation Fronts and Moving Sets

We consider a controlled reaction-diffusion equation, modeling the spreading of an invasive population. Our goal is to derive a simpler model, describing the controlled evolution of a contaminated set. The first part of the talk will focus on the optimal control of 1-dimensional traveling wave profiles. Using Stokes' formula, explicit solutions are obtained, which in some cases require measure-valued optimal controls. In turn, this leads to a family of optimization problems for a moving set, related to the original parabolic problem via a sharp interface limit. In connection with moving sets, in the second part of the talk I will present some results on controllability, existence of optimal strategies, and necessary conditions. Examples of explicit solutions and several open questions will also be discussed. This is joint research with Maria Teresa Chiri and Najmeh Salehi.

Friday, October 22, 2021

Posted September 21, 2021
Last modified October 11, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)

Ilya Kolmanovsky, University of Michigan; IEEE Fellow, AACC Eckman Awardee
Reference Governors for Control of Systems with Constraints

As systems are downsized and performance requirements become more stringent, there is an increasing need for methods that are able to enforce state and control constraints as a part of the control design. The constraints can represent actuator range and rate limits, safety and comfort limits, and obstacle avoidance requirements. Reference governors are add-on supervisory algorithms that monitor and, if necessary, modify commands that are passed to the nominal controller/closed-loop system to ensure that pointwise-in-time state and control constraints are not violated. Approaches to the construction of reference governors will be described along with the supporting theory. Recent extensions of reference governors, such as a controller state and reference governor (CSRG) that in addition to modifying references can reset the controller states, and opportunities for the application of reference governors to ensure feasibility of model predictive controllers, will be discussed. The learning reference governor, which integrates learning into the reference governor operation to handle constraints in uncertain systems, will also be touched upon. The potential for the practical applications of reference governors will be illustrated with several examples.
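
As a minimal illustration of the reference-governor mechanism (a sketch under assumed dynamics, not Prof. Kolmanovsky's specific algorithms), the snippet below implements a scalar reference governor for a pre-stabilized discrete-time loop with a state constraint: at each step the applied reference moves toward the desired one by the largest fraction that a short admissibility prediction allows. The plant parameter a, the constraint x_max, the prediction horizon, and the references are assumptions made for the example.

    # Scalar reference governor for the pre-stabilized loop x[k+1] = a*x[k] + (1 - a)*v[k]
    # with state constraint x <= x_max. The applied reference v moves toward the
    # desired reference r by the largest fraction kappa in [0, 1] that keeps a
    # finite-horizon prediction admissible. All parameters are assumed for illustration.
    a, x_max, n_pred = 0.9, 1.0, 200

    def admissible(x, v):
        """Holding v constant, does the predicted state stay within the constraint?"""
        for _ in range(n_pred):
            x = a * x + (1 - a) * v
            if x > x_max:
                return False
        return True

    x, v, r = 0.0, 0.0, 1.5          # state, applied reference, desired reference
    for _ in range(100):
        lo, hi = 0.0, 1.0            # bisection for the largest admissible kappa
        for _ in range(20):
            mid = 0.5 * (lo + hi)
            if admissible(x, v + mid * (r - v)):
                lo = mid
            else:
                hi = mid
        v += lo * (r - v)            # governed reference passed to the closed loop
        x = a * x + (1 - a) * v
    print(f"x = {x:.3f} (constraint x <= {x_max}), applied reference v = {v:.3f}")

Because the desired reference r = 1.5 would violate the constraint, the governor saturates the applied reference just below x_max while leaving the nominal closed loop untouched.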

Friday, October 29, 2021

Posted August 25, 2021
Last modified October 26, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)

Kyriakos Vamvoudakis, Georgia Institute of Technology
Learning-Based Actuator Placement and Receding Horizon Control for Security against Actuation Attacks

Cyber-physical systems (CPS) comprise interacting digital, analog, physical, and human components engineered for function through integrated physics and logic. Incorporating intelligence in CPS, however, makes their physical components more exposed to adversaries that can potentially cause failure or malfunction through actuation attacks. As a result, augmenting CPS with resilient control and design methods is of grave significance, especially if an actuation attack is stealthy. Towards this end, in the first part of the talk, I will present a receding horizon controller, which can deal with undetectable actuation attacks by solving a game in a moving horizon fashion. In fact, this controller can guarantee stability of the equilibrium point of the CPS, even if the attackers have an information advantage. The case where the attackers are not aware of the decision-making mechanism of one another is also considered, by exploiting the theory of bounded rationality. In the second part of the talk, and for CPS that have partially unknown dynamics, I will present an online actuator placement algorithm, which chooses the actuators of the CPS that maximize an attack security metric. It can be proved that the maximizing set of actuators is found in finite time, despite the CPS having uncertain dynamics. Biography of Kyriakos Vamvoudakis.

Friday, November 5, 2021

Posted September 27, 2021
Last modified November 3, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)

Silviu-Iulian Niculescu, Laboratoire des Signaux et Systèmes (L2S)
Delays in Interconnected Dynamical Systems: A Qualitative Analysis

It is well known that interconnections of two or more dynamical systems lead to an increasing complexity of the overall systems’ behavior, due to the effects induced by the emerging dynamics (which may include feedback loops) in significant interactions (involving sensing and communication) with environmental changes. One of the major problems appearing in such interconnection schemes is related to the propagation, transport, and communication of delays acting through, and inside, the interconnections. The aim of this talk is to briefly present user-friendly methods and techniques (based in part on frequency-domain approaches) for the analysis and control of dynamical systems in the presence of delays. The presentation will be kept as simple as possible, focusing on the main intuitive (algebraic and geometric) ideas used to develop the theoretical results and on their potential use in practical applications. Single and multiple delays will be considered. The talk ends with illustrative examples.

Friday, November 12, 2021

Posted August 18, 2021
Last modified October 31, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request password)

Kirsten Morris, University of Waterloo; IEEE Fellow, SIAM Fellow
Optimal Controller and Actuator Design for Partial Differential Equations

Control can be very effective in altering dynamics. One issue for partial differential equations is that performance depends not only on the controller, but also on its location and spatial design. Existence of a concurrent optimal controller and spatial distribution has been established for several classes of partial differential equations and objectives. Some of these results will be discussed and illustrated with examples.

Friday, November 19, 2021

Posted September 20, 2021
Last modified November 12, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)

Sonia Martinez, University of California, San Diego; IEEE Fellow
Data-Driven Dynamic Ambiguity Sets: Precision Tradeoffs under Noisy Measurements

Stochastic and robust optimization constitute natural frameworks to solve decision-making and control problems subject to uncertainty. However, these fall short in addressing real-world scenarios for which models of the uncertainty are not available. Data-driven approaches can help approximate such models, but typically require large amounts of data in order to produce performance-guaranteed results. Motivated by settings where the collection of data is costly and fast decisions need to be made online, we present recent work on the construction of dynamic ambiguity sets for uncertainties that evolve according to a dynamical law. In particular, we characterize the tradeoff between the amount of progressively assimilated data and its future adequacy, due to the gradual loss of precision in its predicted values.

Friday, December 3, 2021

Posted September 8, 2021
Last modified October 11, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)

Jorge Cortes, University of California, San Diego; IEEE Fellow, SIAM Fellow
Resource-Aware Control and Coordination of Cyberphysical Systems

Trading computation and decision making for less communication, sensing, or actuator effort offers great promise for the autonomous operation of both individual and interconnected cyberphysical systems. Resource-aware control seeks to prescribe, in a principled way, when to use the available resources efficiently while still guaranteeing a desired quality of service in performing the intended task. This talk describes advances in this paradigm along three interconnected thrusts: the design of triggering criteria that balance the trade-offs among performance, efficiency, and implementability; the synthesis of distributed triggers in network systems that can be evaluated by individual agents; and the benefits of flexibly interpreting what constitutes a resource. Throughout the presentation, we illustrate our discussion with applications to stabilization under information constraints, opportunistic actuation of safety-critical systems, and information exchanges in the coordination of multi-agent systems.

Friday, December 10, 2021

Posted September 27, 2021
Last modified October 11, 2021

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am https://lsu.zoom.us/j/94269991036 (Click "Questions or comments?" to request passcode)

Yacine Chitour, Laboratoire des Signaux et Systèmes (L2S)
Worst Exponential Decay Rate for Degenerate Gradient Flows Subject to Persistency of Excitation

In this talk, I will present results on the estimation of the worst rate of exponential decay of degenerate gradient flows $\dot x = -Sx$, issued from adaptive control theory. Under persistent excitation assumptions on the positive semi-definite matrix $S$, upper bounds for this rate of decay consistent with previously known lower bounds are provided, together with analogous stability results for more general classes of persistently excited signals. The strategy of proof consists in relating the worst decay rate to optimal control questions and studying their solutions in detail. As a byproduct of our analysis, estimates for the worst $L_2$-gain of the time-varying linear control system $\dot x = -cc^{\scriptscriptstyle T}x$ are obtained, where the signal $c$ is persistently excited. This is joint work with Paolo Mason and Dario Prandi.
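
As a numerical illustration of the object being analyzed (not of the talk's proofs), the sketch below integrates the rank-one flow $\dot x = -c(t)c(t)^T x$ for the persistently exciting signal c(t) = (cos t, sin t) and reports an empirical exponential decay rate; the signal and all parameters are assumptions chosen for the example.

    import numpy as np

    # Forward-Euler simulation of the degenerate gradient flow x' = -c(t) c(t)^T x
    # with the persistently exciting direction c(t) = (cos t, sin t); the empirical
    # decay rate is read off from -log(||x(T)|| / ||x(0)||) / T.
    dt, T = 1e-3, 50.0
    x = np.array([1.0, 1.0])
    x0_norm = np.linalg.norm(x)

    t = 0.0
    for _ in range(int(T / dt)):
        c = np.array([np.cos(t), np.sin(t)])   # persistently exciting signal
        x = x + dt * (-c * (c @ x))            # Euler step of x' = -c c^T x
        t += dt

    rate = -np.log(np.linalg.norm(x) / x0_norm) / T
    print(f"empirical exponential decay rate: {rate:.3f}")

At each instant the flow only contracts along the single direction c(t), yet persistency of excitation makes the full state decay exponentially, which is the quantity whose worst-case rate the talk estimates.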

Friday, January 21, 2022

Posted January 13, 2022
Last modified January 17, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)

Joel Rosenfeld, University of South Florida
Dynamic Mode Decompositions for Control Affine Systems

We will review the machine learning technique of dynamic mode decomposition (or DMD) for continuous-time systems and show how it may be extended to produce models for the state of an unknown control-affine system using trajectory data. Trajectory data in this setting comes as pairs consisting of a control signal and the corresponding controlled trajectory, and the DMD method for control-affine systems enables the prediction of the action of the system in response to a previously unobserved control signal. This will require a discussion of reproducing kernel Hilbert spaces (or RKHSs), vector-valued RKHSs, control Liouville operators, and multiplication operators.
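
For background, here is a bare-bones sketch of classical exact DMD on snapshot pairs from an assumed linear system; it illustrates only the uncontrolled starting point, not the talk's RKHS-based extension to control-affine systems with Liouville and multiplication operators. The dynamics A_true, the snapshot count, and the random seed are illustration choices.

    import numpy as np

    # Classical exact DMD on snapshot matrices (X1, X2) with X2 = A_true @ X1.
    # The recovered eigenvalues should be close to those of A_true (0.8 and 0.9).
    rng = np.random.default_rng(0)
    A_true = np.array([[0.9, 0.1], [0.0, 0.8]])   # assumed discrete-time dynamics
    snapshots = [rng.standard_normal(2)]
    for _ in range(50):
        snapshots.append(A_true @ snapshots[-1])
    X = np.array(snapshots).T                      # snapshots as columns
    X1, X2 = X[:, :-1], X[:, 1:]                   # shifted snapshot matrices

    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    A_tilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / S)   # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.T @ np.diag(1.0 / S) @ W       # exact DMD modes

    print("DMD eigenvalues:", np.round(np.sort(eigvals.real), 3))

The control-affine version discussed in the talk replaces this finite-dimensional least-squares step with operators acting on reproducing kernel Hilbert spaces built from control-trajectory pairs.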

Friday, February 11, 2022

Posted February 3, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)

Michele Palladino, University of L’Aquila
Optimal Control of Moreau’s Sweeping Process

We present recent and new results on the optimal control of Moreau’s sweeping process (SP). We will present a novel approach for proving a version of the Pontryagin Maximum Principle in a general setting. Such an approach exploits a kind of small-time local controllability property which the SP dynamics naturally satisfies in a neighborhood of the moving constraint. Open problems and further research directions will be extensively discussed.

Friday, February 18, 2022

Posted January 13, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)

Emmanuel Trélat, Sorbonne Université, Paris, France
On the Turnpike Property

The turnpike property was discovered in the 1950s by Nobel prize winner Samuelson in economics. It stipulates that the optimal trajectory of an optimal control problem in large time remains essentially close to a steady state, which is itself the optimal solution of an associated static optimal control problem. We have established the turnpike property for general nonlinear finite- and infinite-dimensional optimal control problems, showing that the optimal trajectory is, except at the beginning and the end of the time interval, exponentially close to some (optimal) stationary state, and that this property also holds for the optimal control and for the adjoint vector coming from the Pontryagin maximum principle. We prove that the exponential turnpike property is due to a hyperbolicity phenomenon which is intrinsic to the symplectic feature of the extremal equations. We infer a simple and efficient numerical method to compute optimal trajectories in that framework, in particular an appropriate variant of the shooting method. The turnpike property turns out to be ubiquitous, and the turnpike set may be more general than a single steady state, such as a periodic trajectory. We also show the shape turnpike property for PDE models in which a subdomain evolves in time according to some optimization criterion. These works are in collaboration with Gontran Lance, Can Zhang, and Enrique Zuazua.
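
For readers new to the statement, the exponential turnpike estimate referred to in the abstract takes, roughly, the following form (stated here informally, as a sketch rather than the talk's precise theorem): there exist constants $C,\mu>0$, independent of the horizon $T$, such that

    \[
      \|x(t)-\bar x\| + \|u(t)-\bar u\| + \|\lambda(t)-\bar \lambda\|
      \;\le\; C\bigl(e^{-\mu t} + e^{-\mu (T-t)}\bigr),
      \qquad t\in[0,T],
    \]

where $(\bar x,\bar u,\bar\lambda)$ is the optimal steady state (with its adjoint) of the associated static problem, so the dynamic extremal is exponentially close to it except near $t=0$ and $t=T$.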

Friday, February 25, 2022

Posted February 8, 2022
Last modified February 21, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)

Cameron Nowzari, George Mason University
Implementable Event-Triggered Controllers for Networked Cyber-Physical Systems

Rapid development of technology is quickly leading us to an increasingly networked and wireless world. With massive wireless networks on the horizon, the efficient coordination of such large networks becomes an important consideration. To efficiently use the available resources, it is desirable to limit wireless communication to only the instances when the individual subsystems actually need attention. Unfortunately, classical time-triggered control systems are based on performing sensing, actuation, and even communication actions periodically in time rather than when necessary. This motivates the need to transcend this prevailing paradigm in exchange for event-triggered control (ETC), where individual subsystems must decide for themselves when to take different actions based on local information. The concept of ETC was proposed as early as the 1960s, but only now are we starting to see practical applications. Since then, the idea of ETC has surged in popularity to the point of essentially standing alone in the area of systems and control. This then begs the question: why is ETC not yet more mainstream, and why has industry still not adopted it in most actual control systems? In this talk we look at this question and argue that the majority of ETC algorithms being proposed today are too theoretical to be useful. We then show how we are addressing this problem by developing a standard set of tools and methodologies for co-designing efficient event-triggered communication and control algorithms for networked systems that can actually be used by practitioners, with quantifiable benefits, performance guarantees, and robustness properties. This talk identifies numerous shortcomings between theoretical concepts and what is actually needed in practice for the theory to be useful, and discusses how we might close this gap. Finally, this talk will cover specific challenges we encountered in applying state-of-the-art event-triggered control algorithms to a wireless clock synchronization problem, and how we overcame them.

Friday, March 4, 2022

Posted February 3, 2022
Last modified February 24, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)

Pauline Bernard, MINES ParisTech
Observer Design for Continuous-Time Dynamical Systems

We review the main techniques of state observer design for continuous-time dynamical systems. Starting from necessary conditions for the existence of such asymptotic observers, we classify the available methods depending on the detectability/observability assumptions they require. We show how each class of observer relies on transforming the system dynamics into a particular normal form which allows the design of an observer, and how each observability condition guarantees the invertibility of its associated transformation and the convergence of the observer. A particular focus will be given to the promising theory of KKL or nonlinear Luenberger observers.
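
As elementary background for the linear case only (not the nonlinear KKL constructions discussed in the talk), the sketch below simulates a classical Luenberger observer xhat' = A xhat + B u + L (y - C xhat) for an assumed two-state plant; the matrices, the gain L, and the input are illustration choices.

    import numpy as np

    # Classical linear Luenberger observer xhat' = A xhat + B u + L (y - C xhat).
    # The plant matrices, the output-injection gain L (chosen so that A - L C is
    # Hurwitz), and the input are illustration choices.
    A = np.array([[0.0, 1.0], [-2.0, -1.0]])
    B = np.array([0.0, 1.0])
    C = np.array([1.0, 0.0])
    L = np.array([3.0, 2.0])

    dt, T = 1e-3, 10.0
    x = np.array([1.0, -1.0])                 # true (unknown) initial state
    xhat = np.zeros(2)                        # observer starts from zero

    t = 0.0
    for _ in range(int(T / dt)):
        u = np.sin(t)                         # arbitrary known input
        y = C @ x                             # measured output
        x = x + dt * (A @ x + B * u)
        xhat = xhat + dt * (A @ xhat + B * u + L * (y - C @ xhat))
        t += dt

    print("final estimation error:", np.round(x - xhat, 6))

The nonlinear designs surveyed in the talk generalize this output-injection idea by first transforming the dynamics into a normal form in which such an observer can be written, then inverting the transformation.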

Friday, March 11, 2022

Posted January 29, 2022
Last modified February 1, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)

Luc Jaulin, ENSTA-Bretagne
Interval Contractors to Solve Dynamical Geometrical Equations with Application to Underwater Robotics

In Euclidean space, the separation between distinct points corresponds to their distance and is purely spatial and positive. In space-time, the separation between events takes into account not only spatial separation between the events, but also their temporal separation. We will consider problems involving geometrical constraints in space-time in an underwater robotics context. The motion of the robots will be described by differential equations, and the clocks attached to each robot are not synchronized. An interval contractor based technique is used to solve the distributed state estimation problem. The method is illustrated on the localization of a group of underwater robots with unsynchronized clocks. In this problem, the travel time of the sound that gives us the distances between robots cannot be neglected.
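
As a toy illustration of the contractor idea only (the talk's distributed space-time method is far richer), the snippet below contracts interval enclosures under the primitive constraint a + b = c, the kind of elementary forward-backward step that contractor-based solvers compose for range and clock constraints; the intervals used are arbitrary assumptions.

    # Elementary interval contractor for the primitive constraint a + b = c,
    # a building block of forward-backward contractors for geometric constraints.

    def inter(x, y):
        """Intersection of two closed intervals (assumed nonempty here)."""
        return (max(x[0], y[0]), min(x[1], y[1]))

    def add(x, y):
        return (x[0] + y[0], x[1] + y[1])

    def sub(x, y):
        return (x[0] - y[1], x[1] - y[0])

    def contract_sum(a, b, c):
        """Contract the intervals a, b, c subject to a + b = c."""
        c = inter(c, add(a, b))          # forward step
        a = inter(a, sub(c, b))          # backward steps
        b = inter(b, sub(c, a))
        return a, b, c

    # Example: a in [0, 10], b in [2, 3], and a measurement gives c = a + b in [4, 5].
    print(contract_sum((0.0, 10.0), (2.0, 3.0), (4.0, 5.0)))
    # contracts a to [1, 3]; b and c are unchanged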

Friday, March 25, 2022

Posted January 28, 2022
Last modified March 9, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)

Anton Selivanov, University of Sheffield, UK
Time-Delay Implementation of Derivative-Dependent Control

Time delays in input or output channels often lead to instability and, therefore, are usually avoided. However, there are systems where delays have a stabilizing effect. This happens because time delays allow one to approximate output derivatives and use them in the feedback law. In this talk, I will consider an LTI system that can be stabilized using only output derivatives. The derivatives are approximated by finite differences, leading to time-delayed feedback. I will present a method for designing and analyzing such feedback under continuous-time and sampled measurements. It will be shown that, if the derivative-dependent control exponentially stabilizes the system, then its time-delayed approximation stabilizes the system with the same decay rate provided the time delay is small enough.
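
A minimal sketch of the mechanism (with an assumed double-integrator plant and assumed gains; the talk's design and analysis are not reproduced): the derivative term in the stabilizing law u = -k1 y - k2 y' is replaced by the backward difference (y(t) - y(t - h))/h, which turns the controller into delayed output feedback.

    # Derivative-dependent control u = -k1*y - k2*y' for the double integrator y'' = u,
    # implemented with the delayed approximation y'(t) ~ (y(t) - y(t - h)) / h.
    # Plant, gains, and delay are assumptions chosen for illustration.
    k1, k2 = 1.0, 1.5
    h = 0.05                          # artificial delay used by the finite difference
    dt, T = 1e-3, 30.0
    n_delay = int(h / dt)

    y, v = 1.0, 0.0                   # position and (unmeasured) velocity
    buffer = [y] * (n_delay + 1)      # stores past outputs y(t - h), ..., y(t)

    for _ in range(int(T / dt)):
        y_delayed = buffer[0]
        u = -k1 * y - k2 * (y - y_delayed) / h   # delayed-feedback approximation
        y, v = y + dt * v, v + dt * u            # Euler step of y'' = u
        buffer = buffer[1:] + [y]

    print(f"y(T) = {y:.4f}, y'(T) = {v:.4f}")

Only the output y is used; the artificial delay h is what makes the derivative information available, which is the effect the talk quantifies.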

Friday, April 1, 2022

Posted January 13, 2022
Last modified February 21, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)

Pierdomenico Pepe, University of L'Aquila
Sampled-Data Event-Based Stabilization of Retarded Nonlinear Systems

We present an event-based controller for the stabilization of nonlinear retarded systems. The main features of the controller we provide are that (i) only sampled-data measures of the Euclidean internal variable are needed, thus avoiding continuous-time monitoring of the state in infinite dimensional spaces, (ii) the event function is only evaluated at sampling instants, and involves a finite number of most recent measures, and (iii) discontinuous feedbacks and non-uniform sampling are allowed. The controller guarantees semi-global practical asymptotic stability to an arbitrarily small final target ball around the origin, by suitably fast sampling.

Friday, April 8, 2022

Posted January 31, 2022
Last modified April 6, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)

Franco Rampazzo, Dipartimento di Matematica Pura ed Applicata, Università degli Studi di Padova
Goh and Legendre-Clebsch Conditions for Non-Smooth Optimal Control Problems

Various generalizations of the original Maximum Principle (Pontryagin et al., 1956) have been produced in different theoretical frameworks in the literature, starting from the pioneering works of F. Clarke in the 1970s up to recent papers. For an end-point constrained optimal control problem with control affine dynamics, I will present ideas (from a work in progress with F. Angrisani) in the direction of adding higher order necessary conditions to the Maximum Principle. In particular, one can generalize the classical Goh condition and the Legendre-Clebsch condition (which include Lie brackets) to the case where the data are nonsmooth. In fact, the recently introduced notion of Quasi Differential Quotient (Palladino and R., 2020) allows one to treat two simultaneous kinds of non-smoothness, namely the one concerning the adjoint inclusion and the one connected with the set-valued Lie brackets (R. and Sussmann 2001), within the same framework.

Friday, April 22, 2022

Posted February 6, 2022
Last modified February 18, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)

Xiaobo Tan, Michigan State University; MSU Foundation Professor and Richard M. Hong Endowed Chair in Electrical Engineering
Modeling and Control of Hysteresis Using Minimal Representations

Hysteresis remains a key nonlinearity in magnetic and smart material actuators that challenges their control performance. High-fidelity modeling and effective compensation of hysteresis, yet with low computational complexity, are of immense interest. In this talk I will share some recent advances in this direction via several examples. First, I will present the optimal reduction problem for a Prandtl-Ishlinskii (PI) operator, one of the most popular hysteresis models, where an optimal approximation of the original operator with fewer constituent elements (play operators) is obtained via efficient dynamic programming. Second, I will discuss adaptive estimation of play radii, instead of their weights, as an alternative means for accurate modeling of hysteresis with a PI operator of low complexity. Finally, I will report a dynamic inversion approach to hysteresis compensation that requires minimal, qualitative conditions on the system model. Throughout the talk I will use experimental results from smart materials to illustrate the methods.
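
For concreteness, here is a bare-bones discrete-time implementation of the play operator and of a Prandtl-Ishlinskii model built as a weighted sum of plays; the radii and weights are illustrative assumptions, not identified actuator parameters, and the optimal-reduction and inversion methods of the talk are not sketched.

    import numpy as np

    # Discrete-time play operator and Prandtl-Ishlinskii (PI) hysteresis model
    # formed as a weighted superposition of play operators.

    def play(u, r, y0=0.0):
        """Play (backlash) operator with radius r applied to the input sequence u."""
        y = np.empty_like(u)
        prev = y0
        for k, uk in enumerate(u):
            prev = min(uk + r, max(uk - r, prev))   # clamp into the band [uk - r, uk + r]
            y[k] = prev
        return y

    def prandtl_ishlinskii(u, radii, weights):
        """PI operator: weighted sum of play operators with the given radii."""
        return sum(w * play(u, r) for r, w in zip(radii, weights))

    t = np.linspace(0.0, 4.0 * np.pi, 2000)
    u = np.sin(t)                                   # periodic input sweeps out a hysteresis loop
    radii = [0.0, 0.1, 0.2, 0.3]                    # assumed play radii
    weights = [1.0, 0.8, 0.5, 0.3]                  # assumed weights
    y = prandtl_ishlinskii(u, radii, weights)
    print("output range:", float(y.min()), float(y.max()))

The complexity of such a model grows with the number of play operators, which is why reducing the number of constituent elements, or adapting the radii themselves, matters for real-time compensation.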

Friday, April 29, 2022

Posted January 27, 2022
Last modified February 15, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)

Andrea Serrani, Ohio State University; Editor-in-Chief of IEEE-TCST, Interim Chair of the Department of ECE
Adaptive Feedforward Compensation of Harmonic Disturbances for Convergent Nonlinear Systems

Rejecting periodic disturbances occurring in dynamical systems is a fundamental problem in control theory, with numerous technological applications such as control of vibrating structures, active noise control, and control of rotating mechanisms. From a theoretical standpoint, any design philosophy aimed at solving this problem reposes upon a specific variant of the internal model principle, which states that regulation can be achieved only if the controller embeds a copy of the exogenous system generating the periodic disturbance. In the classic internal model control (IMC), the plant is augmented with a replica of the exosystem, and the design is completed by a unit which provides stability of the closed loop. In a somewhat alternative design methodology, referred to as adaptive feedforward compensation (AFC), a stabilizing controller for the plant is computed first and then an observer of the exosystem is designed to provide asymptotic cancelation of the disturbance at the plant input. In particular, the parameters of the feedforward control are computed adaptively by means of pseudo-gradient optimization, using the regulated error as a regressor. Contrary to IMC, which has been the focus of extensive investigation, application of AFC methods to nonlinear systems has remained so far elusive. This talk aims at presenting results that set the stage for a theory of AFC for nonlinear systems by providing a nonlinear equivalent of the condition for the solvability of the problem in the linear setting, and by re-interpreting classical linear schemes in a fully nonlinear setting. To this end, the problem is approached by combining methods from output regulation theory with techniques for semi-global stabilization.
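
An illustrative scalar sketch of the AFC mechanism described above, for the classical linear case (the plant, disturbance frequency, and adaptation gain are assumptions; the talk's nonlinear theory is not reproduced): the feedforward parameters multiplying sine and cosine terms at the known disturbance frequency are adapted by a pseudo-gradient law driven by the regulated error.

    import numpy as np

    # Adaptive feedforward compensation for the stable scalar plant x' = -x + u + d
    # with a sinusoidal disturbance d of known frequency w. The feedforward
    # u = -theta^T phi(t) is adapted by a pseudo-gradient law driven by the
    # regulated error x. All parameters are assumed for illustration.
    w, gamma = 2.0, 5.0
    dt, T = 1e-3, 40.0

    x = 0.0
    theta = np.zeros(2)                       # feedforward parameters
    d1, d2 = 1.0, -0.5                        # unknown disturbance components

    t = 0.0
    for _ in range(int(T / dt)):
        phi = np.array([np.sin(w * t), np.cos(w * t)])
        d = d1 * phi[0] + d2 * phi[1]
        u = -theta @ phi                      # adaptive feedforward compensation
        x += dt * (-x + u + d)
        theta += dt * (gamma * x * phi)       # pseudo-gradient update using the regulated error
        t += dt

    print("estimated disturbance parameters:", np.round(theta, 3))  # close to [d1, d2]

In this linear setting the adapted parameters converge to the disturbance components, so the disturbance is asymptotically canceled at the plant input; the talk addresses when and how this picture carries over to convergent nonlinear systems.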

Friday, May 6, 2022

Posted January 26, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click "Questions or comments?" to request Zoom link)

Sophie Tarbouriech, Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS), France
Algorithms for Event-Triggered Control

Event-triggered control consists of devising event-triggering mechanisms leading to only seldom control updates. In the context of event-triggered control, two objectives that can be pursued are (1) emulation, whereby the controller is a priori predesigned and only the event-triggering rules have to be designed and (2) co-design, where the joint design of the control law and the event-triggering conditions has to be performed. Control systems are connected to generic digital communication networks for implementation, transmission, coding, or decoding. Therefore, event-triggered control strategies have been developed to cope with communication, energy consumption, and computation constraints. The talk is within this scope. Considering linear systems, the design of event-triggering mechanisms using local information is described through linear matrix inequality (or LMI) conditions. From these conditions, the asymptotic stability of the closed loop system, together with the avoidance of Zeno behavior, are ensured. Convex optimization problems are studied to determine the parameters of the event-triggering rule with the goal of reducing the number of control updates.

Friday, May 13, 2022

Posted February 2, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)

Monica Motta, Università di Padova, Italy
Stabilizability in Optimal Control

We address the stabilizability of nonlinear control systems in an optimal control theoretic framework. First, we extend the nowadays classical concepts of sampling and Euler solutions that were developed by F. Clarke, Y. Ledyaev, E. Sontag, A. Subbotin, and R.J. Stern (1997, 2000) for control systems associated to discontinuous feedbacks, by also considering corresponding costs, given by integrals of a nonnegative Lagrangian. In particular, we introduce the notions of sample and Euler stabilizability to a closed target set with regulated cost, which require the existence of a stabilizing feedback that keeps the cost of all sampling and Euler solutions starting from the same point below the same level. Then, under mild regularity hypotheses on the dynamics and on the Lagrangian, we prove that the existence of a special control Lyapunov function, which we refer to as a minimum restraint function (or MRF), implies not only stabilizability, but also that all sample and Euler stabilizing trajectories have regulated costs. The proof is constructive, being based on the synthesis of appropriate feedbacks derived from the MRF. As in the case of classical control Lyapunov functions, this construction requires that the MRF is locally semiconcave. However, by generalizing an earlier result by L. Rifford (2000), we establish that it is possible to trade regularity assumptions on the data for milder regularity assumptions on the MRF. In particular, we show that if the dynamics and the Lagrangian are locally Lipschitz up to the boundary of the target, then the existence of a mere locally Lipschitz MRF provides sample and Euler stabilizability with regulated cost. This talk is based on joint work with Anna Chiara Lai (Sapienza University of Rome, Italy), which is part of an ongoing, wider investigation of global asymptotic controllability and stabilizability from an optimal control perspective.

Friday, November 4, 2022

Posted June 12, 2022

Control and Optimization Seminar Questions or comments?

9:30 am - 10:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)

Naomi Leonard, Princeton University; MacArthur Fellow and Fellow of ASME, IEEE, IFAC, and SIAM
TBA