This is a draft of the program. Indicated times are Central European Time.

The colloquium is entirely virtual and free of charge. The meeting link is:

https://unilu.webex.com/meet/lars.beex

December 13, 2021
  • 08:45 - 09:00 Connect and welcome (Lars Beex, Luxembourg) https://unilu.webex.com/meet/lars.beex
  • 09:00 - 09:40 Bruno Sudret (KEYNOTE, ETH Zurich) Title: SURROGATE MODELS FOR EFFICIENT UNCERTAINTY QUANTIFICATION. Computational models are used in virtually all fields of applied sciences and engineering to predict the behavior of complex natural or man-made systems. Also known as simulators, they allow engineers to assess the performance of a system in-silico, and then optimize its design or operation. Realistic representations such as finite element models usually feature tens of parameters and are costly to run, even when taking full advantage of the available computer power. In parallel, the more complex the system, the larger the uncertainty in its governing parameters and in its environmental and operating conditions. In this respect, uncertainty quantification methods used to solve reliability, sensitivity or optimal design problems may require thousands to millions of model runs when using brute-force techniques such as Monte Carlo simulation, which is not affordable with high-fidelity simulators. In contrast, surrogate models allow us to tackle these problems by constructing an accurate approximation of the simulator’s response from a limited number of runs at selected values (the so-called experimental design) and some learning algorithm. In this lecture, general features of surrogate models will first be introduced. Polynomial chaos expansions will then be discussed in detail, together with their sparse version for high-dimensional problems. Recent extensions to dynamics will be addressed and applications in sensitivity analysis will be shown as an illustration.
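    A minimal sketch of the regression-based construction of a polynomial chaos surrogate in Python (the two-input "simulator", the design size and the truncation degree below are illustrative assumptions, not from the talk):

      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermeval

      def simulator(x):
          # placeholder for an expensive model with two independent N(0,1) inputs
          return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

      def He(n, x):
          # probabilists' Hermite polynomial of degree n
          c = np.zeros(n + 1); c[n] = 1.0
          return hermeval(x, c)

      rng = np.random.default_rng(0)
      X = rng.standard_normal((50, 2))          # experimental design
      y = simulator(X)                          # runs of the simulator

      # multivariate Hermite basis up to total degree 2; coefficients by least squares
      alphas = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
      Psi = np.column_stack([He(i, X[:, 0]) * He(j, X[:, 1]) for i, j in alphas])
      coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

      # output statistics follow directly from the coefficients (alphas[0] is the constant term)
      norms = np.array([factorial(i) * factorial(j) for i, j in alphas])
      print("PCE mean estimate:", coef[0])
      print("PCE variance estimate:", np.sum(coef[1:] ** 2 * norms[1:]))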
  • 09:40 - 10:05 Alice Cicirello (TU Delft) Title: INTERPRETABLE, EXPLAINABLE AND NON-INTRUSIVE PROPAGATION OF INTERVAL UNCERTAINTIES THROUGH EXPENSIVE-TO-EVALUATE MODELS. Detailed computational models are usually employed to assess the performance of an engineering solution and compare it to alternative solutions. However, it is often the case that little information is available concerning the actual value and/or the inherent variability of the key input parameters of these detailed computational models. Often, these parameters are known only in terms of their maximum and minimum values, referred to as intervals or bounds. For example, these bounds might represent 'tolerance values' on the parameters associated with the properties of a component of a structure (e.g. the thickness of a structural component). Alternatively, they can be used to describe known limits on the values that an input variable of a model might take (e.g. Young's modulus). To assess the performance of an engineering solution under interval uncertainties, and therefore to accurately evaluate the bounds on the response variable of interest, one would need to perform a large number of computer model evaluations. However, engineering problems are usually characterized by expensive-to-run detailed deterministic models and a large number of uncertainties. Several strategies have been developed to address this type of problem. The most successful ones are based on combining optimization algorithms with surrogate models. However, these approaches require a large number of model runs to construct an accurate surrogate model. Therefore, they can entail a considerable computational effort if a large number of uncertain parameters need to be considered. During this talk, I will present two non-intrusive approaches for propagating interval uncertainties through expensive-to-evaluate models. An ML-based optimization strategy is developed to reduce the number of simulations to be evaluated and simultaneously quantify the uncertainty in the resulting response bound estimates. In particular, Bayesian Optimization is employed for evaluating the upper and lower bounds of a generic response variable over the set of possible responses obtained when each interval variable varies independently over its range. The results obtained with the proposed approaches are interpretable, since the selection of the next combinations of interval variables to be investigated is justified in terms of the trade-off between exploration and exploitation, considering a probabilistic model that embeds both user knowledge and noise-free observations obtained with a physics-based model. The results are also explainable, and they can be used to justify the decision of running additional simulations, thanks to the introduction of two metrics for verifying whether the bound estimates are satisfactory. Moreover, the bounds predicted with the proposed approaches over-predict the true minimum and under-predict the true maximum of the response with respect to the observations currently available. Indeed, this is an advantage in engineering applications, where only a limited number of simulations can be investigated.
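    A minimal Python sketch of the underlying idea, assuming a Gaussian-process model and a simple confidence-bound rule in place of the full Bayesian Optimization machinery; the "expensive_model" and the input intervals are placeholders:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def expensive_model(x):                   # placeholder simulator
          return np.sin(3 * x[:, 0]) * x[:, 1] + x[:, 1] ** 2

      lo, hi = np.array([0.0, 0.5]), np.array([1.0, 1.5])   # interval bounds on the inputs
      rng = np.random.default_rng(1)
      X = lo + (hi - lo) * rng.random((5, 2))   # small initial design
      y = expensive_model(X)

      for it in range(15):
          gp = GaussianProcessRegressor(RBF(0.3), normalize_y=True).fit(X, y)
          cand = lo + (hi - lo) * rng.random((2000, 2))
          mu, sd = gp.predict(cand, return_std=True)
          # alternate between searching for the upper and the lower response bound
          x_next = cand[np.argmax(mu + 2 * sd)] if it % 2 == 0 else cand[np.argmin(mu - 2 * sd)]
          X = np.vstack([X, x_next])
          y = np.append(y, expensive_model(x_next[None, :]))

      print("estimated response bounds:", y.min(), y.max())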
  • 10:05 - 10:30 Feras Alkam (Bauhaus University Weimar) Title: EIGENFREQUENCY-BASED BAYESIAN APPROACH FOR DAMAGE IDENTIFICATION IN CATENARY POLES. This study proposes an efficient Bayesian, frequency-based damage identification approach to identify damage in cantilever structures with an acceptable error rate, even at high noise levels. The catenary poles of electric high-speed train systems were selected as a realistic case study to cover the objectives of this study. Compared to other frequency-based damage detection approaches described in the literature, the proposed approach is able to detect damage in cantilever structures at higher levels of damage identification, namely identifying both the damage location and severity, using a low-cost structural health monitoring (SHM) system with a limited number of sensors, for example accelerometers. The integration of Bayesian inference, as a stochastic framework, in the proposed approach makes it possible to utilize the benefit of data fusion in merging the informative data from multiple damage features, which increases the quality and accuracy of the results. The findings provide the decision-maker with the information required to manage maintenance, repair, or replacement procedures.
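    A toy Python sketch of grid-based Bayesian identification of damage location and severity from eigenfrequencies; the "frequencies" function is a made-up stand-in for a calibrated model of the pole, and all values are illustrative:

      import numpy as np

      def frequencies(loc, sev):
          # made-up stand-in for a calibrated FE model: first three eigenfrequencies [Hz]
          base = np.array([1.2, 7.5, 21.0])
          shape = np.array([np.sin(np.pi * loc * (k + 1)) ** 2 for k in range(3)])
          return base * np.sqrt(1.0 - sev * shape)

      rng = np.random.default_rng(2)
      true_loc, true_sev, noise = 0.35, 0.25, 0.05
      f_meas = frequencies(true_loc, true_sev) * (1 + noise * rng.standard_normal(3))

      locs, sevs = np.linspace(0.05, 0.95, 91), np.linspace(0.0, 0.6, 61)
      logpost = np.array([[-0.5 * np.sum(((f_meas - frequencies(l, s)) / (noise * f_meas)) ** 2)
                           for s in sevs] for l in locs])   # flat prior, Gaussian likelihood
      i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
      print("MAP damage location and severity:", locs[i], sevs[j])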
  • 10:30 - 10:45 Coffee break
  • 10:45 - 11:10 Ludovic Noels (Liege) Title: BAYESIAN INFERENCE OF MULTISCALE MODEL PARAMETERS WITH ARTIFICIAL NEURAL NETWORKS AS SURROGATE. In the context of multiscale models, it is not always possible to identify the constituents' properties directly, and inverse analysis is a way to identify them from experimental data obtained at the higher scale. For example, non-aligned Short Fiber Reinforced Polymer (SFRP) responses can be modelled by Mean-Field Homogenization (MFH), but some geometrical parameters, such as the effective aspect ratio, and some phase material parameters, such as the matrix model parameters, should be inferred from composite experimental responses in order to avoid extensive measurement campaigns at the micro-scale. In practice, because of the increase in the number of parameters in the non-linear models, this identification requires several loading conditions, and a unique set of parameters cannot reproduce all the experimental tests because of, on the one hand, the model limitations and, on the other hand, the experimental errors. Bayesian Inference (BI) allows circumventing these difficulties, but requires a large number of model evaluations during the sampling process. Although MFH is computationally efficient, when considering non-aligned inclusions the evaluation cost of a non-linear response for a given set of model and material parameters remains prohibitive. In this work, a Neural Network (NNW) is first trained using the MFH model, and is then used as a surrogate model during the BI process, which is conducted using experimental composite coupon tests as observation data.
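    A minimal Python sketch of the offline/online split, with a placeholder "micro_model" standing in for the MFH model and a small scikit-learn network standing in for the actual NNW surrogate; parameter ranges, noise level and data are assumptions:

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      def micro_model(theta, eps):              # placeholder for the MFH response
          E, h = theta
          return E * eps / (1.0 + h * eps ** 2)

      rng = np.random.default_rng(3)
      eps = np.linspace(0.0, 0.05, 20)          # strain points of the coupon test

      # offline stage: train the surrogate on samples of the parameter space
      thetas = rng.uniform([50.0, 0.0], [200.0, 500.0], size=(2000, 2))
      Y = np.array([micro_model(t, eps) for t in thetas])
      nn = make_pipeline(StandardScaler(),
                         MLPRegressor((64, 64), max_iter=3000, random_state=0)).fit(thetas, Y)

      # online stage: Metropolis-Hastings with the surrogate in the likelihood
      sigma = 0.05
      data = micro_model(np.array([120.0, 200.0]), eps) + sigma * rng.standard_normal(eps.size)
      logl = lambda t: -0.5 * np.sum((nn.predict(t[None, :])[0] - data) ** 2) / sigma ** 2

      theta = np.array([100.0, 100.0]); ll_theta = logl(theta); chain = []
      for _ in range(5000):
          prop = theta + rng.normal(0.0, [5.0, 20.0])
          ll_prop = logl(prop)
          if np.log(rng.random()) < ll_prop - ll_theta:
              theta, ll_theta = prop, ll_prop
          chain.append(theta)
      print("posterior mean of (E, h):", np.mean(chain, axis=0))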
  • 11:10 - 11:35 Konstantinos Agathos (Exeter) Title: ACCELERATING FRACTURE MECHANICS SIMULATIONS FOR INVERSE PROBLEMS AND UNCERTAINTY QUANTIFICATION. Fracture mechanics problems are typically treated deterministically, assuming known material parameters, boundary conditions and loading. However, in most practical applications these parameters can be subject to uncertainty, thus affecting the predictive capabilities of the models employed. Uncertainty quantification methods and, in the presence of experimental data, inverse problems make it possible to estimate or reduce this uncertainty, but may require repeated model evaluations, which can be computationally infeasible. While Model Order Reduction (MOR) techniques could accelerate these evaluations, the localised nature of fracture and the overall complexity of the problems render their application challenging. In particular, parametric projection-based methods, which can offer substantial speed-ups, would be infeasible due to the very large parametric spaces required to account for phenomena such as crack propagation. On the other hand, adaptive techniques, which are much more effective for the considered problems, require interaction with the full order model, thus limiting the computational gains offered. In the present work, an alternative approach is proposed, where low-dimensional spaces are constructed from appropriately selected columns of the flexibility matrix of the system. It can be shown that these spaces contain the solution of the full order problem, while their dimension is much smaller. Nevertheless, to allow their online construction for arbitrary localised features, the full flexibility matrix of the system should be available. To this end, a Hierarchically Off-Diagonal Low Rank (HODLR) representation is used for the matrices involved, allowing the flexibility matrix to be computed efficiently and with reduced memory requirements. The efficiency of the approach can be enhanced through combination with adaptive techniques, achieving speed-ups comparable to parametric methods without sacrificing the flexibility and accuracy of the full order model.
  • 11:35 - 12:00 Sid Kumar (TU Delft) Title: DISCOVERING HYPERELASTIC CONSTITUTIVE LAWS WITHOUT STRESS DATA. Despite recent advances in the direction of data-driven methods, constitutive modelling of materials remains embedded in a supervised setting where stress-strain pairs are assumed to be available. However, in most common experimental setups it is difficult to probe the entire stress-strain space, while obtaining such labelled data via multiscale simulations is expensive. The biggest roadblock is: how does one even measure full stress tensors (forces are only boundary-averaged projections of stress tensors) for learning the stress-strain relations? To bypass these challenges, we recently proposed a new data-driven approach called EUCLID (https://euclid-code.github.io), which stands for Efficient Unsupervised Constitutive Law Identification and Discovery. The approach is unsupervised, i.e., it requires no stress data but only displacement and global force data, which are realistically available through mechanical testing and digital image correlation techniques. The problem of unsupervised discovery is solved by leveraging physical laws such as conservation of linear momentum in the bulk and at the loaded boundary of the domain. Developing this approach further, we discover physically interpretable models embodied by either (i) parsimonious mathematical expressions discovered through a hierarchical-Bayesian sparse regression over a large catalogue of candidate functions, or (ii) an ensemble of physics-consistent neural networks with higher generalization capability at the cost of analytical treatment. In both cases, the approach automatically estimates the noise in the data and discovers multi-modal probabilistic models with quantifiable uncertainties in the stress-strain response. We demonstrate the ability of the approach to discover several hyperelastic constitutive laws, including, e.g., the Mooney-Rivlin, Arruda-Boyce, and Ogden models, without using any stress data.
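    A strongly simplified, uniaxial Python sketch of the sparse-regression ingredient only (the actual EUCLID formulation works with full displacement fields and weak-form force balance); the candidate library, specimen area and noise level are assumptions:

      import numpy as np
      from sklearn.linear_model import Lasso

      lam = np.linspace(1.05, 2.0, 40)                     # measured stretches
      I1, I2 = lam**2 + 2/lam, 2*lam + 1/lam**2            # incompressible invariants
      # candidate strain-energy terms, given as (dW/dI1, dW/dI2)
      library = [
          (np.ones_like(lam),  np.zeros_like(lam)),        # I1 - 3
          (np.zeros_like(lam), np.ones_like(lam)),         # I2 - 3
          (2*(I1 - 3),         np.zeros_like(lam)),        # (I1 - 3)^2
          (np.zeros_like(lam), 2*(I2 - 3)),                # (I2 - 3)^2
          (I2 - 3,             I1 - 3),                    # (I1 - 3)(I2 - 3)
      ]
      # nominal-stress feature of each term for incompressible uniaxial tension
      Phi = np.column_stack([2*(lam - 1/lam**2)*(dW1 + dW2/lam) for dW1, dW2 in library])

      A0, theta_true = 25.0, np.array([0.3, 0.05, 0.0, 0.0, 0.0])   # mm^2, MPa (Mooney-Rivlin truth)
      rng = np.random.default_rng(4)
      force = A0 * Phi @ theta_true + 0.2 * rng.standard_normal(lam.size)   # global force data [N]

      # sparse regression selects a parsimonious model from force data alone
      fit = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(A0 * Phi, force)
      print("discovered coefficients:", np.round(fit.coef_, 3))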
  • 12:00 - 13:30 Lunch break
  • 13:30 - 14:10 Mark Girolami (KEYNOTE, Alan Turing/Cambridge) Title: A STATISTICAL CONSTRUCTION OF THE FINITE ELEMENT METHOD. The finite element method (FEM) is one of the great triumphs of applied mathematics, numerical analysis, software development, and engineering analysis. Recent developments in sensor and signalling technologies enable the phenomenological study of a diverse range of engineered systems. The connection between sensor data and the FEM is restricted to solving inverse problems, placing unwarranted faith in the fidelity of the mathematical description of the system under consideration. If one concedes misspecification between generative reality and the FEM, then a framework to systematically characterise this uncertainty is required. This talk will present a statistical construction of the FEM, where the classical Hilbert space function reconstruction is endowed with an appropriate probability measure, giving rise to a probabilistic representation of the FEM and associated error analysis, which leads naturally to a practical methodology that systematically blends the mathematical description with observations.
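    A strongly condensed Python sketch of the statistical reinterpretation: the deterministic FE solution becomes the mean of a Gaussian prior on the discretised field, and sensor observations update it by Gaussian conditioning; covariances, sensor locations and data below are placeholders, not the construction presented in the talk:

      import numpy as np

      # 1D Poisson problem -u'' = f on (0,1), homogeneous Dirichlet BCs, linear elements
      n = 49; h = 1.0 / (n + 1)
      K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h
      f = np.ones(n) * h
      u_fem = np.linalg.solve(K, f)                     # classical FE solution (prior mean)

      # prior covariance on the FE coefficients (placeholder: squared-exponential)
      x = np.linspace(h, 1 - h, n)
      C_u = 0.01 ** 2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2)

      # noisy, slightly offset observations at three sensor nodes
      P = np.zeros((3, n)); P[0, 9] = P[1, 24] = P[2, 39] = 1.0
      sigma_e = 0.002
      rng = np.random.default_rng(11)
      y = P @ u_fem + 0.005 + sigma_e * rng.standard_normal(3)

      # Gaussian conditioning: posterior over the FE field given the sensor data
      S = P @ C_u @ P.T + sigma_e ** 2 * np.eye(3)
      gain = C_u @ P.T @ np.linalg.solve(S, np.eye(3))
      u_post = u_fem + gain @ (y - P @ u_fem)
      cov_post = C_u - gain @ P @ C_u
      print("mid-span: FE =", u_fem[24], " posterior =", u_post[24], "+/-", np.sqrt(cov_post[24, 24]))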
  • 14:10 - 14:35 Elizabeth Cross (Sheffield) Title: PROBABILISTIC ASSESSMENTS OF STRUCTURAL HEALTH WITH PHYSICS-INFORMED MACHINE LEARNING. The use of machine learning in engineering, and particularly in Structural Health Monitoring, is becoming more common, as many of the inherent tasks (such as regression and classification) in developing condition-based assessment fall naturally into its remit. This talk will focus on probabilistic approaches to physics-informed machine learning, where one adapts ML algorithms to account for the physical insight an engineer will often have of the structure they are attempting to model or assess. We will show how grey-box models, which combine simple physics-based models with data-driven ones, can improve predictive capability in an SHM setting, both in terms of accuracy and confidence. The examples shown will consider uncertainty propagation from these predictive models into a fatigue assessment.
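    A minimal Python sketch of a grey-box model in the sense described above: a placeholder physics model provides the mean and a Gaussian process learns the residual, returning predictions with confidence bands; all models and data are made-up placeholders:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      def physics_model(x):                    # e.g. an idealised linear stiffness relation
          return 2.0 * x

      rng = np.random.default_rng(5)
      x = np.linspace(0, 3, 25)[:, None]
      y = 2.0 * x[:, 0] + 0.4 * np.sin(4 * x[:, 0]) + 0.05 * rng.standard_normal(25)   # "measurements"

      resid = y - physics_model(x[:, 0])       # white-box contribution removed
      gp = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(0.01), normalize_y=True).fit(x, resid)

      x_new = np.linspace(0, 3.5, 100)[:, None]
      mu, sd = gp.predict(x_new, return_std=True)
      pred = physics_model(x_new[:, 0]) + mu   # grey-box prediction = physics + learned residual
      print("prediction and 95% band at x = 3.2:", pred[90], "+/-", 1.96 * sd[90])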
  • 14:35 - 15:00 Bojana Rosic (Twente) Title: SPARSE BAYESIAN LEARNING FOR STOCHASTIC COMPUTATIONAL MECHANICS. In a Bayesian setting, inverse problems and uncertainty quantification (UQ), i.e. the propagation of uncertainty through a computational (forward) mechanical model, are strongly connected. Whether the surrogate model or the parameter set/model state are to be estimated given data, Bayesian identification offers computationally attractive alternatives to the classical non-intrusive estimation approaches. In this talk, both parameter identification and surrogate model estimation will be presented, taking into account conditional approximation of the Bayes posterior as well as various approximations of the forward and inverse maps. In particular, special attention will be paid to the two most popular functional approximations: the polynomial chaos expansion and the deep neural network. Finally, a few examples will be presented in the context of nonlinear mechanics and computational fluid dynamics problems.
  • 15:00 - 15:15 Coffee break
  • 15:15 - 15:40 Yupeng Zhang (Texas A&M) Title: CHARACTERIZATION OF PLASTIC PROPERTIES FROM INDENTATION. Instrumented indentation tests provide an attractive means for obtaining data to characterize the plastic response of engineering materials. However, the connection between indentation responses and material plastic properties is not straightforward. One difficulty is that the relation between the measured indentation force versus indentation depth response and the plastic stress-strain response is not necessarily unique. We consider results for three sets of plastic material properties that give rise to essentially identical curves of indentation force versus indentation depth in conical indentation. The corresponding surface profiles after unloading are also calculated. These computed results are regarded as the “experimental” data. A Bayesian-type statistical approach is used to identify the values of flow strength and strain hardening exponent for each of the three sets of material parameters. The effect of fluctuations (“noise”) superposed on the “experimental” data is also considered. We build the database for the Bayesian-type analysis using finite element calculations for a relatively coarse set of parameter values and use interpolation to refine the database. A good estimate of the uniaxial stress-strain response is obtained for each material, both in the absence of fluctuations and in the presence of sufficiently small fluctuations. Generally, the form of strain hardening relation that gives the best fit for a material is not known a priori. The influence of the assumed strain hardening relation on the stress-strain identification is studied, showing that the identified uniaxial stress-strain response is not very sensitive to the form of the power-law strain hardening relation chosen, even with data that have significant noise. Finally, the identification of plastically compressible solids via spherical indentation is studied.
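    A toy Python sketch of the database-plus-Bayes idea, with a made-up analytical "load_curve" standing in for the finite element database of indentation responses; grids, noise and parameter values are assumptions:

      import numpy as np

      h = np.linspace(0.0, 1.0, 30)             # indentation depth (normalised)

      def load_curve(sigma_y, n):
          # made-up stand-in for an FE-computed conical indentation force-depth curve
          return sigma_y * (1.0 + 2.0 * n) * h ** 2

      rng = np.random.default_rng(6)
      data = load_curve(0.5, 0.2) + 0.01 * rng.standard_normal(h.size)   # noisy "experiment"

      # coarse database grid over flow strength and hardening exponent (the actual
      # FE database is refined by interpolation before the Bayesian analysis)
      sig_y, ns = np.linspace(0.2, 1.0, 81), np.linspace(0.0, 0.5, 51)
      logpost = np.array([[-0.5 * np.sum((data - load_curve(s, n)) ** 2) / 0.01 ** 2
                           for n in ns] for s in sig_y])
      i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
      print("MAP flow strength and hardening exponent:", sig_y[i], ns[j])
      # the near-flat posterior ridge along sigma_y*(1 + 2n) = const. mirrors the
      # non-uniqueness of force-depth data discussed in the abstract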
  • 15:40 - 16:05 Rudy Geelen (Oden Institute) Title: LOCALIZED NON-INTRUSIVE REDUCED-ORDER MODELLING IN THE OPERATOR INFERENCE FRAMEWORK. This talk introduces a new approach for data-driven learning of localized reduced models. Localized model reduction methods use multiple local approximation subspaces to construct the reduced model. In contrast to a fixed global approximation subspace, localization permits adaptation of the reduced model to local dynamics, thereby keeping the reduced-order dimension small. This is particularly important for constructing reduced models of nonlinear systems of partial differential equations, where the solution may be characterized by different physical regimes and may exhibit high sensitivity to parameter variations. The contribution of our work is a localized reduction approach that can be applied non-intrusively; that is, the reduced model is constructed entirely from snapshot data and does not require access to the high-fidelity discretized operators. This makes the approach accessible, portable, and applicable to a broad range of problems, including those that use proprietary or legacy high-fidelity codes. In the offline phase, our approach partitions the solution state space into subregions and constructs a local reduced-order basis for each subregion. The Operator Inference (OpInf) approach is employed to find the local reduced matrix operators that yield the reduced model that best matches the projected snapshot data in a minimum-residual sense. Then, in the online solution of the reduced model, a local basis is chosen adaptively based on the current system state. We demonstrate the potential of the localized OpInf approach for achieving large computational speedups while maintaining good accuracy for several different nonlinear systems, including a simple Burgers’ equation example and a more challenging phase-field problem governed by the Cahn-Hilliard equations.
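    A minimal Python sketch of (global, linear) Operator Inference from snapshot data; the localized variant described above would repeat this fit within each subregion of the state space. The system below is a placeholder and its operator is never exposed to the reduction step:

      import numpy as np

      # snapshot data from a "black-box" linear system x' = A x
      n, r, dt = 50, 4, 1e-3
      rng = np.random.default_rng(7)
      A_true = -np.diag(np.linspace(1.0, 10.0, n)) + 0.1 * rng.standard_normal((n, n))
      X = [rng.standard_normal(n)]
      for _ in range(2000):
          X.append(X[-1] + dt * A_true @ X[-1])
      X = np.array(X).T                                  # n x (K+1) snapshot matrix

      # POD basis from the snapshots
      U, _, _ = np.linalg.svd(X, full_matrices=False)
      V = U[:, :r]

      # project snapshots, estimate time derivatives, and solve the least-squares
      # problem  min || Xdot_r - A_r X_r ||  for the reduced operator A_r
      Xr = V.T @ X
      Xdot_r = (Xr[:, 1:] - Xr[:, :-1]) / dt
      A_r, *_ = np.linalg.lstsq(Xr[:, :-1].T, Xdot_r.T, rcond=None)
      A_r = A_r.T

      # reduced model prediction vs. full snapshot at the final time
      xr = Xr[:, 0].copy()
      for _ in range(2000):
          xr = xr + dt * A_r @ xr
      print("relative error at final time:",
            np.linalg.norm(V @ xr - X[:, -1]) / np.linalg.norm(X[:, -1]))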
  • 16:05 - 16:45 Lori Brady (KEYNOTE, Johns Hopkins) Title: DATA-DRIVEN APPROACHES FOR SURROGATE MODELS OF MECHANICS IN MULTI-PHASE MATERIALS. Uncertainty quantification for physics-based models is challenged by both sparse data from materials characterization and the significant computational effort associated with these models. Machine learning provides a construct to circumvent these issues, by enhancing existing data through material reconstruction and by offering efficient surrogate models. In particular, digital reconstruction of three-dimensional stochastic microstructures based on transfer learning allows the generation of multiple microstructures from a limited set of reference microstructures. These microstructures provide a statistically significant set of digitally generated samples, allowing for an assessment of the uncertainty associated with random microstructure sampling. Furthermore, image-to-image mapping from composite microstructure to stress distribution, using an encoder-decoder U-Net architecture, provides a rapid approach for identifying micro-scale stresses from macro-scale models. This talk will focus on these two approaches, discussing some of the very promising results, their potential to enhance uncertainty quantification for mechanics of multi-phase materials, and the potential limitations of the proposed techniques that can be addressed in the future.
December 14, 2021
  • 10:15 - 10:30 Connect and welcome (Lars Beex, Luxembourg) https://unilu.webex.com/meet/lars.beex
  • 10:30 - 11:10 Phaedon-Stelios Koutsourelakis (KEYNOTE, München) Title: DATA-DRIVEN INVERSION OF THE PROCESS-STRUCTURE-PROPERTY CHAIN FOR THE DESIGN OF RANDOM MATERIAL MICROSTRUCTURES. This talk is concerned with enabling and accelerating the intelligent design of new materials through the development of novel probabilistic machine learning models. The fundamental goal to engineer materials which exhibit superior or targeted properties is synonymous with the ability to understand, model and invert the Process-Structure-Property (P-S-P) chain. Processing variables (e.g. heating/cooling schedule, composition) determine the formation of material microstructures in a stochastic manner (Process-Structure link). In turn, variations in (the statistical characteristics of) these microstructures affect their effective properties (Structure-Property link), which we seek to control. While forward modeling along the P-S-P chain still poses several challenges, particularly in multi-physics/scale settings, the inversion of these causal relations has received much less attention due to significant theoretical and practical difficulties. High-throughput computational tools that can also provide derivatives/sensitivities are indispensable in order to explore the multi-dimensional design space. The complexity and cost of resolving the underlying relations in turn necessitate suitable surrogate models that retain predictive accuracy. Existing strategies ignore the presence of uncertainties, rely on Big Data and the direct transfer of technologies from Machine Learning, and fail to couple surrogate development with the optimization problem that ultimately needs to be solved. We propose a holistic, integrative methodological framework for inverting the P-S-P chain in order to identify the accessible processing variables which give rise to (micro)structures that achieve property-related objectives. In view of the limitations in the state of the art, our framework: a) accounts for the whole P-S-P chain, b) quantitatively models uncertainties from all aforementioned sources and incorporates them in the optimization objective, c) makes use of data-driven surrogates that incorporate physical, domain knowledge in both black-box and grey-box settings in order to improve accuracy and reduce data requirements (Small Data), and d) couples surrogate construction with the solution of the stochastic optimization problem in order to enable adaptive learning.
  • 11:10 - 11:35 Keith Worden (Sheffield) Title: ON GENERATIVE MODELS AS THE BASIS FOR DIGITAL TWINS. A framework is proposed for generative models as a basis for digital twins or mirrors of structures. The proposal is based on the premise that deterministic models cannot account for the uncertainty present in most structural modelling applications. Two different types of generative models are considered here. The first is a physics-based model based on the stochastic finite element (SFE) method, which is widely used when modelling structures that have material and loading uncertainties imposed. Such models can be calibrated according to data from the structure and would be expected to outperform any other model if the modelling accurately captures the true underlying physics of the structure. The potential use of SFE models as digital mirrors is illustrated via application to a linear structure with stochastic material properties. For situations where the physical formulation of such models does not suffice, a data-driven framework is proposed, using machine learning and conditional generative adversarial networks (cGANs). The latter algorithm is used to learn the distribution of the quantity of interest in a structure with material nonlinearities and uncertainties. For the examples considered in this work, the data-driven cGAN models outperform the physics-based approach. Finally, an example is shown where the two methods are coupled, such that a hybrid modelling approach is demonstrated.
  • 11:35 - 12:00 Jack Hale (Luxembourg) Title: USING THE MALLIAVIN DERIVATIVE AS A MEASURE OF SENSITIVITY IN STOCHASTIC MECHANICS PROBLEMS. The Malliavin calculus extends the classical calculus of variations to stochastic processes. In this talk I will show how the Malliavin calculus can be used to efficiently calculate the sensitivity of stochastic mechanics problems with respect to the parameters of their underlying distributions. This technique is well established in computational finance, but has seen little use in mechanics. I will then show two examples, the first involving a simple viscoelastic SDE system, and the second involving an SPDE model of a hyperelastic column undergoing buckling. I will explain how the Malliavin derivative can give additional insight into stochastic system behaviour beyond classical notions of the derivative.
  • 12:00 - 13:30 Lunch break
  • 13:30 - 13:55 Pierre Kerfriden (MinesParisTech/Cardiff) Title: THE BAYESIAN FINITE ELEMENT METHOD: A PROBABILISTIC PDE SOLVER WITH IN-BUILT ERROR ESTIMATOR. The finite element method is routinely used to aid in the design and certification of engineering systems. This method is now relatively well mastered, owing in particular to the availability of tools for verifying the quality of the calculations, ranging from simple mesh convergence studies to advanced a posteriori error estimation methods together with mesh adaptivity. However, the state of the art in terms of finite element error control remains limited. The most advanced approaches, which are based on a posteriori error estimation, are restricted to the evaluation of errors in pre-defined scalar quantities of interest, typically some functional of the error field. Moreover, these methods do not easily integrate into probabilistic numerical chains (e.g. data assimilation, Bayesian inverse problems, optimal control under uncertainty), as they deliver point estimates of errors in quantities of interest, rather than probability distributions. In this talk, we will describe a new approach based on Bayesian probabilistic numerical methods. In this context, the solution of the finite element problem is reinterpreted as the solution of a data assimilation problem, similar to that of field reconstruction in geostatistics, or Gaussian process regression in machine learning. To this end, a prior probability density is defined for the unknown finite element field. The equations to be solved as part of the traditional deterministic finite element method are subsequently reinterpreted as partial observations of this unknown field. It is then possible to obtain a formal expression for the posterior probability density corresponding to the finite element solution. We will describe and justify the successive steps for this construction, and propose an efficient sampler.
  • 13:55 - 14:20 Costas Papadimitriou (Thessaly) Title: ROBUST BAYESIAN OPTIMAL EXPERIMENTAL DESIGN IN STRUCTURAL DYNAMICS. Data collected from experiments or monitoring systems are used to reduce the uncertainties involved in the process of building a digital twin of a physical system, estimating the parameters of the models involved, and providing reliable predictions of output quantities of interest (QoI). These uncertainties are crucial in monitoring system health, safety and performance, and in making optimal decisions regarding system maintenance. The objective in optimal experimental design (OED) is to optimize the design of the experimental setup or monitoring systems such that the most informative data are obtained to reduce the uncertainties involved in a digital twin and its predictions. We propose a Bayesian OED framework based on maximizing a utility function built from appropriate measures of information in the data. Specifically, the expected Kullback-Leibler divergence between the prior and posterior distribution of the model parameters or response predictions is used as a measure of information gain from an experiment. The design variables may include the type, location and number of sensors and actuators, as well as the characteristics of the excitation (amplitude variation and frequency content). The OED methodology is applicable to mechanics and structural dynamics problems for estimating model parameters and predicting the response of important QoI using input-output measurements, as well as virtual sensing using output-only vibration measurements. Asymptotic approximations, valid for a large number of data points, are proposed for simplifying the multidimensional integrals arising in the formulation. Such approximations provide insightful information about the optimal design of sensor configurations. It is pointed out that the optimal experimental design depends on uncertainties in the system and prediction error modeling process, as well as uncertainties in the characteristics of the applied loads. In particular, the loads (earthquake, wind, water waves, turbulence, etc.) are the most uncertain quantities in the experimental design phase, often modelled by stochastic processes based on design spectra. The utility function is thus extended to make the optimal design robust to these uncertainties. Heuristic algorithms for solving the optimization problem and computational issues are discussed. Examples are presented to demonstrate the effectiveness of the proposed formulation.
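    A minimal Python sketch of the expected-information-gain (expected Kullback-Leibler divergence) utility, estimated by nested Monte Carlo for two candidate sensor positions on a made-up one-parameter model; the forward model, prior and noise level are assumptions:

      import numpy as np

      def response(theta, x_sensor):
          # placeholder forward model: deflection at a sensor location for stiffness theta
          return x_sensor ** 2 * (3.0 - x_sensor) / theta

      rng = np.random.default_rng(8)
      sigma = 0.02                                 # measurement noise std
      prior = lambda m: rng.lognormal(mean=np.log(10.0), sigma=0.3, size=m)

      def expected_information_gain(x_sensor, n_outer=300, n_inner=300):
          eig = 0.0
          for theta_i in prior(n_outer):
              y = response(theta_i, x_sensor) + sigma * rng.standard_normal()
              # log-likelihood under the sampled "true" parameter
              loglik_i = -0.5 * ((y - response(theta_i, x_sensor)) / sigma) ** 2
              # log-evidence estimated with an inner prior sample (constants cancel)
              lik_inner = np.exp(-0.5 * ((y - response(prior(n_inner), x_sensor)) / sigma) ** 2)
              eig += (loglik_i - np.log(lik_inner.mean())) / n_outer
          return eig

      for x in (0.5, 1.0):
          print("sensor at x =", x, ": estimated expected information gain =",
                expected_information_gain(x))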
  • 14:20 - 14:45 Hussein Rappel (Alan Turing/Cambridge/Exeter) Title: PROBABILISTIC MODELING AND IDENTIFICATION OF INTERCORRELATED BOUNDED RANDOM FIELDS: APPLICATION TO LINEAR ELASTIC STRUTS AND FIBERS. Various materials and structures consist of numerous slender struts or fibers. Manufacturing processes of different types of struts and growth processes of natural fibers lead to fluctuations in mechanical responses from strut to strut and within each strut. Since the conventional 3D finite elements for these struts lead to computationally inefficient models, often, each strut is modeled by a string of beam elements. Ideally, the parameter fields (input parameter fields) of each string of beam elements are such that the fluctuations for each string and between individual strings are reproduced accurately. In this study, we model the spatially varying parameter fields of the beam representation as several (five to be exact) intercorrelated random fields. We furthermore discuss the modeling of the intercorrelated random fields using Gaussian copula processes to impose some physical constraints (i.e. the physical parameters describing the geometries of struts should be positive) and their identification. The aim is to identify the parameters of the intercorrelated random input fields with a small number of known strut geometries so that we properly capture both the reaction forces and reaction moments varying between struts and the spatial fluctuations of centerline displacements. Since the experimental characterization of strut geometries is typically time-consuming, we consider only a few geometries. The small number of known geometries makes the identification problem ill-posed and introduces substantial uncertainties. For this reason, we employ a probabilistic framework to identify the parameters of the random fields.
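    A minimal Python sketch of two intercorrelated, bounded random fields generated with a Gaussian copula along a strut axis (correlated Gaussian fields mapped through their CDF to non-Gaussian marginals); the marginals, correlation length and cross-correlation are illustrative assumptions:

      import numpy as np
      from scipy.stats import norm, beta, lognorm

      s = np.linspace(0.0, 1.0, 100)            # centreline coordinate along the strut
      ell, rho12 = 0.2, 0.7                     # correlation length, cross-correlation
      C = np.exp(-np.abs(s[:, None] - s[None, :]) / ell)        # exponential covariance

      # joint covariance of the two underlying Gaussian fields
      Sigma = np.kron(np.array([[1.0, rho12], [rho12, 1.0]]), C)
      L = np.linalg.cholesky(Sigma + 1e-10 * np.eye(2 * s.size))

      rng = np.random.default_rng(9)
      z = L @ rng.standard_normal(2 * s.size)
      u1, u2 = norm.cdf(z[:s.size]), norm.cdf(z[s.size:])       # Gaussian copula

      radius = 0.4 + 0.2 * beta(2, 2).ppf(u1)                   # bounded in [0.4, 0.6] mm
      modulus = lognorm(s=0.1, scale=70e3).ppf(u2)              # strictly positive [MPa]
      print("radius range:", radius.min(), radius.max())
      print("modulus range:", modulus.min(), modulus.max())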
  • 14:45 - 15:10 Johann Guilleminot (Duke) Title: STOCHASTIC MODELING FOR CONSTRAINED PARAMETRIC UNCERTAINTIES ON COMPLEX GEOMETRIES. In this talk, we describe the construction of admissible, identifiable stochastic models for constrained parametric uncertainties on complex geometries, described by smooth, nonconvex manifolds. We focus on stochastic constitutive models and geometrical imperfections. We specifically present theoretical and computational procedures to ensure well-posedness in the forward propagation step, as well as results pertaining to model calibration based on physical experiments. Various applications in computational biomechanics and additive manufacturing are finally presented. This is joint work with Ph.D. students Peiyi Chen, Shanshan Chu, and Hao Zhang at Duke University.
  • 15:10 - 15:35 Ethan Pickering (MIT) Title: ACTIVE LEARNING OF NONLINEAR OPERATORS VIA NEURAL NETS FOR PREDICTING EXTREME EVENTS. Through approximations of nonlinear operators via neural networks, we develop a framework for computing predictors of extreme events (i.e. instabilities) in infinite-dimensional systems. Extreme phenomena or instabilities, such as pandemic spikes, electrical-grid failure, or rogue waves, have catastrophic consequences for society. Thus, accurate modeling of their dynamical behavior is critical for mitigating impact. Unfortunately, characterizing extreme events is difficult because of their rarity of occurrence, the infinite-dimensionality of the dynamics, and the stochastic perturbations that excite them. These challenges are problematic as standard training of machine learning models assumes both plentiful data and moderate dimensionality. Neither is the case for extreme events. To navigate these challenges, we combine a neural network architecture designed for approximating infinite-dimensional, nonlinear operators with novel training schemes that actively select data for characterizing extreme events. We apply and assess these methods to prototype systems for deep-water waves having the form of partial differential equations. In this case, the extreme events take the form of randomly triggered modulation instabilities. Finally, we conclude by discussing the generality of this approach for modeling other extreme phenomena.
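    A minimal Python sketch of the active-learning loop, with a Gaussian process standing in for the neural operator and a simple acquisition biased towards large responses; the toy "system" and all settings are assumptions:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def system(x):
          # toy input-to-response map with a rare, localised extreme response
          return 5.0 * np.exp(-((x - 0.82) / 0.05) ** 2) + 0.3 * np.sin(6 * x)

      rng = np.random.default_rng(10)
      X = rng.random((6, 1)); y = system(X[:, 0])        # small initial data set

      for _ in range(20):
          gp = GaussianProcessRegressor(RBF(0.1), normalize_y=True).fit(X, y)
          cand = rng.random((1000, 1))
          mu, sd = gp.predict(cand, return_std=True)
          x_next = cand[np.argmax(mu + 3.0 * sd)]        # favour likely-large, uncertain inputs
          X = np.vstack([X, x_next]); y = np.append(y, system(x_next[0]))

      print("largest response found:", y.max(), "near x =", X[np.argmax(y), 0])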