Calendar


Thursday, March 28, 2024

Posted March 19, 2024

Computational Mathematics Seminar

3:30 pm Digital Media Center 1034

Yue Yu, Lehigh University
Nonlocal operator is all you need

Over the last 20 years there has been substantial progress in applying neural networks (NNs) to many machine learning tasks. However, their use in scientific machine learning, with the goal of learning the physics of complex systems, is less explored. Unlike other machine learning tasks, such as computer vision and natural language processing, where large amounts of unstructured data are available, physics-based machine learning tasks often feature scarce and structured measurements. In this talk, we take the learning of heterogeneous material responses as an exemplar problem to investigate the design of neural networks for physics-based machine learning. In particular, we propose to parameterize the mapping between loading conditions and the corresponding system responses in the form of nonlocal neural operators, and to infer the neural network parameters from high-fidelity simulations or experimental measurements. As such, the model is built as a mapping between infinite-dimensional function spaces, and the learnt network parameters are resolution-agnostic: no further modification or tuning is required to achieve the same level of prediction accuracy at different resolutions. Moreover, the nonlocal operator architecture allows the incorporation of intrinsic mathematical and physics knowledge, which improves the learning efficacy and robustness from scarce measurements. To demonstrate the applicability of our nonlocal operator learning framework, three typical scenarios in physics-based machine learning will be discussed: the learning of a material-specific constitutive law, the learning of an efficient PDE solution operator, and the development of a foundational constitutive law across multiple materials. As an application, we learn material models directly from digital image correlation (DIC) displacement tracking measurements on porcine tricuspid valve leaflet tissue, and show that the learnt model substantially outperforms conventional constitutive models.
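The resolution-agnostic property described above can be made concrete with a kernel-integral layer whose learned weights act on a fixed number of Fourier modes, in the style of Fourier neural operators. The sketch below is a minimal illustration of that idea, not the speaker's actual architecture; the weights W_modes and W_local are hypothetical stand-ins for trained parameters, applied unchanged at two grid resolutions.

    import numpy as np

    def nonlocal_layer(u, W_modes, W_local, n_modes=16):
        """One nonlocal (kernel-integral) layer in the Fourier-neural-operator
        style: a learned multiplier on the lowest n_modes Fourier modes, plus
        a pointwise linear term and a ReLU."""
        u_hat = np.fft.rfft(u)                       # function values -> Fourier coefficients
        v_hat = np.zeros_like(u_hat)
        v_hat[:n_modes] = W_modes * u_hat[:n_modes]  # weights touch a fixed set of modes
        v = np.fft.irfft(v_hat, n=u.size)            # back to physical space
        return np.maximum(v + W_local * u, 0.0)      # pointwise nonlinearity

    # Because the weights act on a fixed number of modes, the same parameters
    # apply on grids of different resolution without retraining or tuning.
    rng = np.random.default_rng(0)
    W_modes = rng.standard_normal(16) + 1j * rng.standard_normal(16)  # stand-in for trained weights
    W_local = 0.5
    for n_grid in (64, 256):                         # two resolutions, same weights
        x = np.linspace(0.0, 1.0, n_grid, endpoint=False)
        u = np.sin(2 * np.pi * x)
        print(n_grid, nonlocal_layer(u, W_modes, W_local).mean())  # comparable outputs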

Monday, April 1, 2024

Posted January 22, 2024
Last modified March 4, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom

Boris Kramer, University of California San Diego
Scalable Computations for Nonlinear Balanced Truncation Model Reduction

Nonlinear balanced truncation is a model order reduction technique that reduces the dimension of nonlinear systems on nonlinear manifolds while preserving either open- or closed-loop observability and controllability aspects of the nonlinear system. Two computational challenges have so far prevented its deployment on large-scale systems: (a) the solution of the Hamilton-Jacobi-(Bellman) equations needed to characterize the controllability and observability aspects, and (b) efficient model reduction and reduced-order model (ROM) simulation on the resulting nonlinear balanced manifolds. We present a novel, unifying, and scalable approach to balanced truncation for large-scale control-affine nonlinear systems, built on a Taylor-series approach to solving a class of parametrized Hamilton-Jacobi-Bellman equations that are at the core of balancing. The specific tensor structure of the Taylor-series coefficients (which are themselves tensors) allows for scalability up to thousands of states. Moreover, we will present a nonlinear balance-and-reduce approach that finds a reduced nonlinear state transformation that balances the system properties. The talk will illustrate the strength and scalability of the algorithm on several semi-discretized nonlinear partial differential equations, including a nonlinear heat equation, vibrating beams, Burgers' equation, and the Kuramoto-Sivashinsky equation.
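For orientation, the linear specialization of this setting is classical balanced truncation, where the energy functions characterized by the Hamilton-Jacobi-Bellman equations reduce to Gramians solving Lyapunov equations. The sketch below shows that linear baseline via the standard square-root method with SciPy; it illustrates the setting the talk generalizes, not the speaker's nonlinear algorithm, and the heat-equation test system is an assumption chosen for the example.

    import numpy as np
    from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

    def balanced_truncation(A, B, C, r):
        """Square-root balanced truncation of a stable linear system
        x' = Ax + Bu, y = Cx, keeping r states."""
        # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0
        P = solve_continuous_lyapunov(A, -B @ B.T)
        Q = solve_continuous_lyapunov(A.T, -C.T @ C)
        jitter = 1e-12 * np.eye(A.shape[0])          # tiny shift; the Gramian spectra decay fast
        Lp = cholesky(P + jitter, lower=True)        # P ~ Lp Lp^T
        Lq = cholesky(Q + jitter, lower=True)        # Q ~ Lq Lq^T
        U, hsv, Vt = svd(Lq.T @ Lp)                  # Hankel singular values
        S = np.diag(hsv[:r] ** -0.5)
        W = Lq @ U[:, :r] @ S                        # left projection
        T = Lp @ Vt[:r].T @ S                        # right projection, so W^T T = I_r
        return W.T @ A @ T, W.T @ B, C @ T, hsv

    # Example: semi-discretized 1D heat equation, input at the left end,
    # output read at the right end, reduced from 12 states to 4.
    n = 12
    A = (n + 1) ** 2 * (np.diag(-2.0 * np.ones(n))
                        + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    B = np.eye(n)[:, :1]
    C = np.eye(n)[-1:, :]
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
    print(hsv)                                       # rapid decay justifies a small ROM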

Monday, April 1, 2024

Posted March 26, 2024

Applied Analysis Seminar

3:30 pm Lockett Hall 233

Wei Li, DePaul University
TBA

Tuesday, April 2, 2024

Posted November 14, 2023
Last modified March 26, 2024

Algebra and Number Theory Seminar

3:20 pm – 4:10 pm Lockett 233 or Zoom

Micah Milinovich, University of Mississippi
Biases in the gaps between zeros of Dirichlet L-functions

We describe a family of Dirichlet L-functions that provably have unusual value distribution and experimentally exhibit a significant and previously undetected bias in the distribution of gaps between their zeros. This has an arithmetic explanation that corresponds to the nonvanishing of a certain Gauss-type sum. We give a complete classification of the characters for which these sums are nonzero and count the number of corresponding characters. It turns out that this Gauss-type sum vanishes for 100% of primitive Dirichlet characters, so the L-functions in our newly discovered family are rare (a zero-density set amongst primitive characters). If time allows, I will also describe some newly discovered experimental results concerning a "Chebyshev-type" bias in the gaps between the zeros of the Riemann zeta-function. This is joint work with Jonathan Bober (Bristol) and Zhenchao Ge (Waterloo).
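The specific Gauss-type sum from the talk is not reproduced here, but the flavor of such computations can be shown with the classical Gauss sum tau(chi) = sum over a mod p of chi(a) exp(2 pi i a / p) attached to a Dirichlet character. The sketch below, an illustration rather than the talk's actual criterion, enumerates all characters mod a prime via a primitive root and evaluates their Gauss sums; for nontrivial chi mod p the absolute value is sqrt(p), while the trivial character gives 1.

    import cmath
    import math

    def prime_factors(n):
        """Distinct prime factors of n, by trial division."""
        fs, d = set(), 2
        while d * d <= n:
            while n % d == 0:
                fs.add(d)
                n //= d
            d += 1
        if n > 1:
            fs.add(n)
        return fs

    def dirichlet_characters(p):
        """Yield all p-1 Dirichlet characters mod an odd prime p, built from a
        primitive root g via chi_j(g^k) = exp(2 pi i j k / (p-1))."""
        g = next(g for g in range(2, p)
                 if all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors(p - 1)))
        dlog = {pow(g, k, p): k for k in range(p - 1)}   # discrete logarithms base g
        for j in range(p - 1):
            yield lambda a, j=j: (cmath.exp(2 * math.pi * 1j * j * dlog[a % p] / (p - 1))
                                  if a % p else 0)

    def gauss_sum(chi, p):
        """tau(chi) = sum_{a=1}^{p-1} chi(a) exp(2 pi i a / p)."""
        return sum(chi(a) * cmath.exp(2 * math.pi * 1j * a / p) for a in range(1, p))

    p = 13
    for chi in dirichlet_characters(p):
        print(round(abs(gauss_sum(chi, p)), 6))   # sqrt(13) ~ 3.605551; 1.0 for the trivial character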

Wednesday, April 3, 2024

Posted January 18, 2024

Informal Geometry and Topology Seminar

1:30 pm Lockett 233

Huong Vo, Louisiana State University
TBA

Wednesday, April 3, 2024

Posted December 1, 2023
Last modified March 18, 2024

Geometry and Topology Seminar

3:30 pm Lockett 233

Neal Stoltzfus, Mathematics Department, LSU
TBA