Calendar


Friday, February 22, 2008

Posted February 11, 2008
Last modified February 17, 2008

Frontiers of Scientific Computing Lecture Series

11:30 am – 12:30 pm 338 Johnston Hall

William Gropp, Mathematics and Computer Science Division, Argonne National Laboratory
Challenges for the Message-Passing Interface in the PetaFLOPS Era

There will be a reception at 11:00 AM.

Wednesday, April 2, 2008

Posted March 25, 2008
Last modified March 2, 2021

Frontiers of Scientific Computing Lecture Series

1:30 pm 338 Johnston Hall

Alex Pothen, Old Dominion University Professor, Computer Science Department and Center for Computational Science
Combinatorial Algorithms Enabling Computational Science and Engineering

Combinatorial problems arise as critical subproblems in many computational simulations in science and engineering. Combinatorial scientific computing (CSC) is a multi-disciplinary area in which such problems are formulated and solved. The CSCAPES Institute has been established to develop algorithms and software for such problems and thereby enable high performance computing for breakthrough science.

Friday, April 18, 2008

Posted April 7, 2008

Frontiers of Scientific Computing Lecture Series

11:30 am – 12:30 pm

Tim Warburton, Rice University Assistant Professor, Department of Computational and Applied Math
Advances in Wave Propagation with the Discontinuous Galerkin Method


Friday, October 3, 2008

Posted September 25, 2008

Frontiers of Scientific Computing Lecture Series

11:30 am – 12:30 pm 338 Johnston Hall

George Karniadakis, Brown University
Multiscale Modeling of the Human Arterial Tree on the TeraGrid

http://www.cct.lsu.edu/events/talks/423

Monday, February 2, 2009

Posted January 13, 2009

Frontiers of Scientific Computing Lecture Series

2:00 pm – 3:00 pm 338 Johnston Hall

Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory Member, National Academy of Engineering
An Overview of High Performance Computing and Challenges for the Future

In this talk we examine how high performance computing has changed over the last ten years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Additional details can be found at http://www.cct.lsu.edu/events/talks/449

Monday, February 16, 2009

Posted November 24, 2008
Last modified January 13, 2009

Frontiers of Scientific Computing Lecture Series

11:00 am – 12:00 pm 338 Johnston Hall

Marsha Berger, Courant Institute Member, National Academy of Sciences and National Academy of Engineering
Computing Fluid Flows in Complex Geometry

We give an overview of the difficulties in simulating fluid flow in complex geometry. The principal approaches use either overlapping or patched body-fitted grids, unstructured grids, or Cartesian (non-body-fitted) grids, with our work focusing on the latter. Cartesian methods have the advantage that no explicit mesh generation is needed, greatly reducing the human effort involved in complex flow computations. However, it is a challenge to find stable and accurate difference formulas for the irregular Cartesian cells cut by the boundary. We discuss some of the steps involved in preparing for and carrying out a fluid flow simulation in complicated geometry. We present some of the technical issues involved in this approach, including the special discretizations needed to avoid loss of accuracy and stability at the irregular cells, as well as how we obtain highly scalable parallel performance. This method is in routine use for aerodynamic calculations in several organizations, including NASA Ames Research Center. Many open problems will be discussed. Additional details can be found at http://www.cct.lsu.edu/events/talks/443

Refreshments will be served at 10:30.

Tuesday, March 31, 2009

Posted March 13, 2009

Frontiers of Scientific Computing Lecture Series

11:00 am – 12:00 pm 338 Johnston Hall

Lisa Fauci, Tulane University
Interaction of Elastic Biological Structures with Complex Fluids

Wednesday, April 15, 2009

Posted March 5, 2009

Frontiers of Scientific Computing Lecture Series

1:00 pm – 2:00 pm 338 Johnston Hall

Claude Le Bris, ENPC and INRIA, France
Computational Multiscale Mechanics: A Mathematical Perspective

The talk will give an overview of recent progress in the mathematical understanding of numerical approaches coupling an atomistic and a continuum description of matter. The talk is based upon a series of works in collaboration with X. Blanc (Univ. Paris 6), F. Legoll (ENPC), and P.-L. Lions (Collège de France). The perspective is mathematical. The purpose is to describe the theoretical tools and concepts that allow for a better foundation of the numerical approaches, and to point out some important unsolved mathematical issues.

Thursday, February 4, 2010

Posted November 7, 2009
Last modified January 31, 2010

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 338 Johnston Hall

Max Gunzburger, Florida State University
Color Printers, Mailboxes, Fish, and Homer Simpson, or Centroidal Voronoi Tessellations: Algorithms and Applications

Refreshments at 3 pm. Additional details at: http://www.cct.lsu.edu/events/talks/503

Thursday, May 13, 2010

Posted March 25, 2010

Frontiers of Scientific Computing Lecture Series

11:00 am – 12:00 pm 338 Johnston Hall

Leszek Demkowicz, University of Texas at Austin
TBA

Monday, April 25, 2011

Posted February 7, 2011

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 338 Johnston Hall

Mac Hyman, Tulane University
Simple Mathematical Models Can Provide New Insights into Stopping Epidemics

Public health workers are reaching out to mathematical scientists to use disease models to understand and mitigate the spread of emerging diseases. Mathematical and computational scientists are needed to create new tools that can anticipate the spread of new diseases and evaluate the effectiveness of different approaches for bringing epidemics under control. Simple epidemic models can be used in the classroom to provide insight into how the mathematical sciences can improve the health of our world and save lives. The talk will provide an overview, for general audiences, of what type of insights these models can provide. I will describe some of the mathematical advances needed to create the next generation of models, and share my personal experiences in controlling the spread of HIV/AIDS, SARS, malaria, foot-and-mouth disease, and the novel H1N1 (swine) flu.

Thursday, January 19, 2012

Posted October 21, 2011
Last modified November 1, 2011

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 338 Johnston Hall

Chi-Wang Shu, Brown University
Maximum-Principle-Satisfying and Positivity-Preserving High Order Discontinuous Galerkin and Finite Volume Schemes

When solving convection-dominated partial differential equations, such as the incompressible and compressible Euler equations in fluid dynamics, it is a challenge to design numerical schemes which are both strongly stable and high order accurate, especially when the solution contains sharp gradient regions or discontinuities. Previous schemes satisfying a strict maximum principle for scalar equations and positivity preservation for systems are mostly first order, or at most second order, accurate. We construct uniformly high order accurate discontinuous Galerkin (DG) and weighted essentially non-oscillatory (WENO) finite volume (FV) schemes satisfying a strict maximum principle for scalar conservation laws and passive convection in incompressible flows, and positivity preservation for density and pressure for the compressible Euler equations. One remarkable property of our approach is that it is straightforward to extend the method to two and higher dimensions on arbitrary triangulations. We will also emphasize recent developments including arbitrary equations of state, source terms, integral terms, shallow water equations, high order accurate finite difference positivity-preserving schemes for the Euler equations, and a special non-standard positivity-preserving high order finite volume scheme for convection-diffusion equations. Numerical tests demonstrating the good performance of the schemes will be reported. This is joint work with Xiangxiong Zhang. (Additional details at http://www.cct.lsu.edu/events/talks/591)
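The scalar maximum-principle mechanism behind these schemes can be illustrated by the Zhang-Shu scaling limiter: each cell's polynomial is linearly squeezed toward its cell average just enough that its point values fall inside the global bounds, which preserves the cell average (and hence conservation). This is only a hedged sketch of the limiter step, assuming the cell average already lies in [m, M], not the full scheme:

```python
import numpy as np

def scaling_limiter(point_values, cell_average, m, M):
    """Squeeze one cell's polynomial point values into [m, M],
    preserving the cell average (and hence conservation)."""
    pmax, pmin = point_values.max(), point_values.min()
    theta = 1.0
    if pmax > M:
        theta = min(theta, (M - cell_average) / (pmax - cell_average))
    if pmin < m:
        theta = min(theta, (cell_average - m) / (cell_average - pmin))
    # linear rescaling toward the cell average keeps the average intact
    return cell_average + theta * (point_values - cell_average)
```

Because the modification is a linear contraction toward the (already bounded) average, it can be shown not to destroy the order of accuracy of the underlying DG or FV polynomial.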

Thursday, February 2, 2012

Posted December 5, 2011

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 338 Johnston Hall

Ricardo Nochetto, University of Maryland
Modeling, Analysis, and Computation of Biomembranes

We present three models of biomembranes along with their numerical simulation. The first one is purely geometric since the equilibrium shapes are the minimizers of the Willmore (or bending) energy under area and volume constraints. The second model incorporates the effect of the inside (bulk) viscous incompressible fluid and leads to more physical dynamics. The third model describes the interaction of a director field with a membrane, giving rise to an induced spontaneous curvature. We propose a parametric finite element method for the discretization of these models and examine crucial numerical issues such as dealing with curvature and length constraints within a variational framework. We show several simulations describing the dynamics of purely geometric flows, membrane-fluid interaction, and the dramatic effect of defects of the director field on membrane shape. This work is joint with S. Bartels, A. Bonito, G. Dolzmann, M.S. Pauletti, and A. Raisch. Refreshments at 3 pm. Further information at http://www.cct.lsu.edu/events/talks/600

Thursday, August 23, 2012

Posted May 3, 2012
Last modified August 11, 2012

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 338 Johnston Hall

Carsten Carstensen, Humboldt Universität zu Berlin
Five Trends in the Mathematical Foundation of Computational PDEs

This presentation concerns five topics in computational partial differential equations: (1) equivalence of first-order methods for the Poisson problem, (2) nonconforming and mixed finite element methods for the Stokes equations and their adaptivity, (3) adaptive methods for elliptic eigenvalue problems, (4) adaptive error control for obstacle problems, and (5) computational microstructures with degenerate convex minimization. The overall goals for the work in these topics are reliable error control and efficient simulation. The presentation will also demonstrate the surprising advantages of nonstandard discretizations over conforming finite element schemes. (Additional details can be found at http://www.cct.lsu.edu/lectures/five-trends-mathematical-foundation-computational-pdes)

Tuesday, March 5, 2013

Posted November 30, 2012
Last modified January 4, 2013

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 338 Johnston Hall

Weinan E, Princeton University
Modeling Rare Events

Many important dynamic processes in physics, biology, chemistry, and materials science can be viewed as rare events. These events are difficult to model due to the disparity of the time scales involved. From an abstract viewpoint, this can be formulated as the problem of navigating a system over its energy landscape. We will discuss the theoretical framework for describing such events, the numerical algorithms that have been developed, and applications to problems that arise in materials science, chemistry, and fluid mechanics.

Tuesday, March 25, 2014

Posted January 21, 2014

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm Digital Media Center

Randy Bank, University of California San Diego
Some Algorithmic Aspects of hp-Adaptive Finite Elements

Abstract: We will discuss our ongoing investigation of hp-adaptive finite elements. We will focus on a posteriori error estimates based on superconvergent derivative recovery. Besides providing both global error estimates and local error indicators, this family of error estimates also provides information that forms the basis of our hp-adaptive refinement and coarsening strategies. In particular, these a posteriori error estimates address in a cost-efficient and natural way the critical issue of deciding between h and p refinement/coarsening. Some numerical examples will be provided.

Thursday, October 16, 2014

Posted September 8, 2014
Last modified September 17, 2014

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 1034 Digital Media Center

Howard Elman, University of Maryland
Efficient Solution Algorithms for Stochastic Partial Differential Equations

Abstract: We consider new computational methods for solving partial differential equations (PDEs) when components of the problem such as diffusion coefficients or boundary conditions are not known with certainty but instead are represented as random fields. In recent years, several computational techniques have been developed for such models that offer potential for improved efficiencies compared with traditional Monte Carlo methods. These include stochastic Galerkin methods, which use an augmented weak formulation of the PDE derived from averaging with respect to expected value, and stochastic collocation methods, which use a relatively small set of samples that captures the character of the solution space. We give an overview of the relative advantages of these two methods and present efficient computational algorithms for solving the algebraic systems that arise from them. In addition, we show that these algorithms can be combined with techniques of reduced-order modeling to significantly enhance efficiency with essentially no loss of accuracy.

Tuesday, April 14, 2015

Posted March 2, 2015
Last modified March 18, 2015

Frontiers of Scientific Computing Lecture Series

2:00 pm – 3:00 pm 1034 Digital Media Center

Clint Dawson, University of Texas at Austin
Can the Gulf Coast Protect Itself from Hurricane Storm Surge?

Abstract: The active hurricane seasons of the past decade have resulted in significant efforts to understand risk and attempt to mitigate storm surge from hurricanes and tropical storms. Mitigation systems may consist of shoring up existing levees and seawalls, building new structural protection systems, or maintaining or creating natural systems such as barrier islands and wetlands. Modeling and computer simulation are central to investigating the efficacy of these systems. Mathematical models and algorithms which are multi-scale, multi-physics, and high fidelity are required for these efforts. In this talk, we will describe two modeling systems for studying the impacts of surge and waves, and the application of these models to studying built and natural storm surge protection systems. The first model is the well-known Advanced Circulation Model (ADCIRC), which has been widely used to study Gulf storms. We will describe recent studies where this model has been applied to proposed mitigation systems in the Houston-Galveston region, and the complexities associated with these types of studies. The second model we will discuss is a novel fluid-structure model, based on large eddy simulation coupled with a beam equation, for modeling flow through dense and flexible vegetation. Applications to flow through wetlands will be described.

Monday, November 2, 2015

Posted September 13, 2015

Frontiers of Scientific Computing Lecture Series

10:30 am – 11:30 am 1034 Digital Media Center

Raymond Chan, Chinese University of Hong Kong
A Two-stage Image Segmentation Method Based on the Mumford-Shah Model with Thresholding

Abstract: The Mumford-Shah model is one of the most important image segmentation models, and has been studied extensively in the last twenty years. In this talk, we will first survey the past development of the method. Then we introduce our two-stage segmentation method based on the Mumford-Shah model. The first stage of our method is to find a smooth solution g to a convex variant of the Mumford-Shah model. Once g is obtained, in the second stage the segmentation is done by thresholding g into different phases. The thresholds can be given by the users or obtained automatically using any clustering method. Because of the convexity of the model, g can be computed efficiently by standard techniques. We prove that our method is convergent and the solution g is always unique. Experimental results show that our two-stage method performs better than many standard two-phase or multi-phase segmentation methods for very general images, including anti-mass, tubular, MRI, noisy, and blurry images, and for very general noise models such as Gaussian, Poisson, and multiplicative Gamma noise. We will also mention the generalization to color images.

https://www.cct.lsu.edu/lectures/two-stage-image-segmentation-method-based-mumford-shah-model-thresholding
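The two-stage structure of the abstract (compute a smooth g, then threshold it into phases) can be caricatured in a few lines. In this sketch a plain separable Gaussian blur stands in for solving the convex Mumford-Shah variant, so it illustrates only the shape of the algorithm, not the actual model:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # normalized 1-D Gaussian kernel
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def two_stage_segment(image, thresholds, sigma=2.0):
    """Stage 1: smooth the image (stand-in for the convex model).
    Stage 2: threshold into len(thresholds)+1 phases."""
    k = gaussian_kernel(sigma, radius=3 * int(sigma))
    # separable blur: convolve each row, then each column
    g = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    g = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, g)
    # Stage 2: label each pixel by how many thresholds its value exceeds
    return np.digitize(g, thresholds)
```

Because the thresholds enter only in the cheap second stage, changing them (or the number of phases) does not require recomputing g, which is one of the practical attractions of the two-stage approach.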

Wednesday, March 30, 2016

Posted January 26, 2016

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 1034 Digital Media Center

Robert Falgout, Lawrence Livermore National Lab
Space-time Multigrid Solvers for Extreme-scale Scientific Computing

Abstract: Multigrid methods are important techniques for efficiently solving huge systems and they have already been shown to scale effectively on millions of cores. However, one of the major challenges facing computational science with future architectures is that faster compute speeds will be achieved through greater concurrency (more cores), since clock speeds are no longer increasing. Current petascale computers already have millions of cores, but future exascale machines are expected to have billions. This immense degree of parallelism requires a similar level of concurrency in the algorithms that run on them. One consequence of this is that time integration by traditional time marching will become a sequential bottleneck.

In this talk, we will first introduce the multigrid method, discuss its essential features, and provide basic information on its benefits for parallel scientific computing. We will then discuss our efforts to develop multigrid methods for parallel time integration. The approach we use is based on multigrid reduction (MGR) techniques and has the advantage of being easily integrated into existing codes because it builds directly on the original time-stepping scheme. Results for a variety of applications will also be presented. LLNL-ABS-681196.

Tuesday, October 25, 2016

Posted September 24, 2016

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 1034 Digital Media Center

Dongbin Xiu, Ohio State University
Approximation Algorithms for Big Data

Abstract: One of the central tasks in scientific computing is to accurately approximate unknown target functions. This is typically done with the help of data samples of the unknown functions. In statistics this falls into the realm of regression and machine learning; in mathematics, it is the central theme of approximation theory. The emergence of Big Data presents both opportunities and challenges. On one hand, big data introduces more information about the unknowns and, in principle, allows us to create more accurate models. On the other hand, data storage and processing become highly challenging. Moreover, data often contain corruption errors in addition to the standard noise. In this talk, we present some new developments regarding certain aspects of big data approximation. More specifically, we present numerical algorithms that address two issues: (1) how to automatically eliminate corruption/biased errors in data; and (2) how to create accurate approximation models in very high dimensional spaces using stream/live data, without the need to store the entire data set. We present both the numerical algorithms, which are easy to implement, and rigorous analysis of their theoretical foundation.
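Issue (2), building an approximation from streaming data without storing it, can be illustrated by classical recursive least squares (a textbook method, not the specific algorithms of the talk): each new sample updates the coefficients and an inverse-Gramian estimate in O(d²) work via the Sherman-Morrison formula, so no sample is ever kept.

```python
import numpy as np

class StreamingLeastSquares:
    """Recursive least-squares fit of w in y ≈ x·w, one sample at a time."""

    def __init__(self, dim, ridge=1e-6):
        # P approximates the inverse of the normal-equations matrix;
        # a tiny ridge keeps the initial P finite
        self.P = np.eye(dim) / ridge
        self.w = np.zeros(dim)

    def update(self, x, y):
        """Fold in one sample (features x, target y) via Sherman-Morrison."""
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)
        self.w += gain * (y - x @ self.w)     # correct toward the new sample
        self.P -= np.outer(gain, Px)          # rank-one update of the inverse

    def predict(self, x):
        return x @ self.w
```

After the stream is exhausted, `w` matches the batch least-squares solution (up to the small ridge), even though only O(d²) state was ever held in memory.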

Friday, November 10, 2017

Posted August 22, 2017
Last modified September 26, 2017

Frontiers of Scientific Computing Lecture Series

11:00 am – 12:00 pm 1034 Digital Media Center

Xiao-Chuan Cai, University of Colorado Boulder
Numerical Simulation of Blood Flows in Human Arteries

Abstract: We discuss a parallel multilevel domain decomposition algorithm for the simulation of blood flows in arteries by solving a system of nonlinear partial differential equations consisting of an elasticity equation for the artery and an incompressible Navier-Stokes system for the blood flow. The system is discretized with a finite element method on unstructured moving meshes in 3D and solved by a Newton-Krylov algorithm preconditioned with an overlapping Schwarz method. A non-standard, isogeometric coarse mesh is introduced to ensure that the algorithm is scalable in terms of the total compute time when the number of processors is large. Several mathematical, bio-mechanical, and supercomputing issues will be discussed in detail. Simulation of blood flows in patient-specific pulmonary and cerebral arteries will be presented.

Additional details at: https://www.cct.lsu.edu/lectures/numerical-simulation-blood-flows-human-arteries

Tuesday, October 2, 2018

Posted August 19, 2018
Last modified September 5, 2018

Frontiers of Scientific Computing Lecture Series

3:30 pm – 4:30 pm 1034 Digital Media Center

Nicholas Zabaras, University of Notre Dame
Bayesian Deep Learning for Predictive Scientific Computing

Abstract: We will briefly review recent advances in the solution of stochastic PDEs using Bayesian deep encoder-decoder networks. These models have been shown to work remarkably well for uncertainty quantification tasks in very-high dimensions. In this talk through examples in computational physics and chemistry, we will address their potential impact for modeling dynamic multiphase flow problems, accounting for model form uncertainty in coarse grained RANS simulations and providing the means to coarse graining in atomistic models. Emphasis will be given to the small data domain using Bayesian approaches. The training of the network is performed using Stein variational gradient descent. We will show both the predictive nature of these models as well as their ability to capture output uncertainties induced by the random input, limited data and model error. In closing, we will outline the integration of these surrogate models with generative adversarial networks for the solution of inverse problems. NOTE: Reception in 1034 DMC at 3pm. Additional details at: https://www.cct.lsu.edu/lectures/bayesian-deep-learning-predictive-scientific-computing