London Mathematical Society -- EPSRC Durham Symposium
Building bridges: connections and challenges in modern approaches to numerical partial differential equations
2014-07-07 to 2014-07-16

Abstracts of Talks

Assyr Abdulle: Multiscale methods for parabolic and hyperbolic problems

Recent numerical homogenization methods for nonlinear parabolic equations and for linear hyperbolic equations are discussed. We first introduce the numerical method under consideration, the finite element heterogeneous multiscale method (FE-HMM), for elliptic problems and briefly discuss its combination with reduced-order modeling techniques such as the reduced basis method. For nonlinear monotone parabolic problems, we discuss a method that combines the implicit Euler method in time with the FE-HMM in space and derive optimal fully discrete a priori error estimates in both space and time. The upscaling procedure of the method, however, relies on nonlinear elliptic micro problems. As this is computationally costly for practical simulations, we also discuss a new linearized scheme that is much more efficient, as it only involves linear micro problems. Finally, for wave problems in heterogeneous media, we propose a multiscale method which captures not only the short-time macroscale behaviour but also dispersive effects that appear in the true solution with increasing time but are not present in the homogenized model.
References:
[1] A. Abdulle, Y. Bai and G. Vilmart, Reduced basis FE-HMM for quasilinear elliptic homogenization problems, to appear in Discrete Contin. Dyn. Syst., 2014.
[2] A. Abdulle and Y. Bai, Reduced-order modelling numerical homogenization, Phil. Trans. R. Soc. A, 372, 2014.
[3] A. Abdulle and M. Huber, Finite element heterogeneous multiscale method for nonlinear monotone parabolic homogenization problems: a fully discrete space-time analysis, preprint, 2014.
[4] A. Abdulle, M. Huber and G. Vilmart, Linearized numerical homogenization methods for nonlinear monotone parabolic multiscale problems, preprint, 2014.
[5] A. Abdulle, M. Grote and C. Stohrer, Finite element heterogeneous multiscale method for the wave equation: long-time effects, to appear in SIAM Multiscale Model. Simul., 2014.

Mark Ainsworth: Bernstein-Bezier Polynomials for High Order Finite Element Approximation

We explore the use of Bernstein polynomials as a basis for finite element approximation on simplices in any spatial dimension. The Bernstein polynomials have a number of interesting properties that have led to their being the industry standard for visualisation and CAGD. It is shown that the basis enables the element matrices for the standard finite element space, consisting of continuous piecewise polynomials of degree n on simplicial elements in $R^d$, to be computed in optimal complexity $\mathcal{O}(n^{2d})$. The algorithms take into account numerical quadrature; are applicable to nonlinear problems; and do not rely on precomputed arrays containing values of one-dimensional basis functions at quadrature points (although these can be used if desired). The standard tool for the evaluation of Bezier curves and surfaces is the de Casteljau algorithm, which is also the archetypal pyramid algorithm. Pyramid algorithms replace an operation on a single high order polynomial by a recursive sequence of self-similar affine combinations and, as such, offer significant advantages for high order finite element approximation. We develop and analyze pyramid algorithms for the efficient handling of all of the basic finite element building blocks, including the assembly of the element load vectors and element stiffness matrices. The complexity of the algorithm for generating the element stiffness matrix is optimal. A new, nonuniform order, variant of the de Casteljau algorithm is developed that is applicable to the variable polynomial order case but incurs no additional complexity compared with the original algorithm. The work provides the methodology that enables the efficient use of a completely general distribution of polynomial degrees without any restriction in changes between adjacent cells, in any number of spatial dimensions.
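As a minimal illustration of the pyramid structure, here is a sketch (names and setup are mine, not from the talk) of the one-dimensional de Casteljau algorithm: a polynomial in Bernstein form is evaluated by a recursive sequence of affine combinations of neighbouring coefficients, exactly the pattern the abstract calls a pyramid algorithm.

```python
import math

def de_casteljau(coeffs, t):
    """Evaluate a degree-n polynomial in Bernstein form at t in [0, 1]
    via the de Casteljau pyramid: each level replaces neighbouring
    coefficients by the affine combination (1-t)*b[i] + t*b[i+1]."""
    b = list(coeffs)
    while len(b) > 1:
        b = [(1 - t) * b[i] + t * b[i + 1] for i in range(len(b) - 1)]
    return b[0]

def bernstein(n, k, t):
    """Direct evaluation of the Bernstein basis polynomial B_{k,n}(t)."""
    return math.comb(n, k) * t**k * (1 - t)**(n - k)
```

The pyramid evaluation agrees with summing the Bernstein basis directly, and the all-ones coefficient vector reproduces the partition-of-unity property.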

Blanca Ayuso: Nonconforming virtual elements for elliptic problems

We introduce the nonconforming Virtual Element Method (VEM) for the approximation of second order elliptic problems. We present the construction and error analysis of the new element, highlighting the main differences with the conforming VEM and with classical nonconforming finite element methods. The talk is based on joint work with K. Lipnikov and G. Manzini.

Lehel Banjai: Oblivious quadrature for long-time computation of waves

Propagation of waves in 2 dimensions, or in the presence of damping, cf. viscoelastodynamics, follows the weak Huygens' principle. This makes long-time computations expensive when using time-domain boundary integral equations, as the complete history needs to be stored. We will show how the smoothness of this tail can be exploited to perform fast computations. In particular, we will show that the late-time propagation is governed by a parabolic operator and that consequently oblivious quadrature can be applied. The resulting algorithm requires $O(\log N)$ extra storage and computation compared to the computation of problems obeying the strict Huygens' principle. The method will be illustrated by numerical examples.

Gabriel Barrenechea: Opening remarks

Lourenço Beirão da Veiga: An introduction to the Virtual Element Method

The Virtual Element Method (VEM) is a very recent technology introduced in [Beirao da Veiga, Brezzi, Cangiani, Manzini, Marini, Russo, 2013, M3AS] for the discretization of partial differential equations. The VEM can be interpreted as a novel approach that shares the same variational background as the Finite Element Method but enjoys also a connection with modern Mimetic schemes. By avoiding the explicit integration of the shape functions that span the discrete Galerkin space and introducing a novel construction of the associated stiffness matrix, the VEM acquires very interesting properties and advantages with respect to more standard Galerkin methods, yet still keeping the same coding complexity. For instance, the VEM easily allows for polygonal/polyhedral meshes (even non-conforming) with non-convex elements and possibly with curved faces; it allows for discrete solutions of arbitrary C^k regularity, defined on unstructured meshes. The present talk is an introduction to the VEM, aiming at showing the main ideas of the method. In the first part of the talk we will describe the basics of the method on a simple model problem. These will include the construction of the method, the convergence analysis, minimal implementation guidelines and some numerical tests. In the second part, we will present some further advancements (among the many in the literature), namely high regularity VEM spaces and the (incompressible) linear elasticity problem.

Pavel Bochev: A new parameter-free stabilization approach for advection-diffusion equations based on H(curl)-lifting of multi-scale fluxes.

We present a family of stabilized control volume (CV) and finite element (FE) methods for advection-diffusion equations based on a new, multi-scale approximation of the total flux. The latter is defined by an H(curl) lifting of one-dimensional edge fluxes into the mesh elements by using suitable curl-conforming elements. These fluxes are obtained from analytic solutions of the governing equations restricted to the mesh edges. In so doing we obtain a multi-scale approximation of the flux that is stable in the advective limit and does not involve any tunable mesh-dependent stabilization parameters. In the lowest-order case the edge fluxes are obtained by a procedure similar to the Scharfetter-Gummel upwinding, and so the resulting CV methods can be viewed as multidimensional extensions of this classical scheme to arbitrary unstructured grids. This feature sets our CV formulations apart from other Scharfetter-Gummel extensions to multiple dimensions, which require control volumes that are topologically dual to the primal grid. In the higher-order case the edge fluxes are defined on suitable mesh segments comprising multiple edges by a procedure that ``bootstraps'' the classical Scharfetter-Gummel approach. Accordingly we perform the H(curl) lifting by using edge elements that match the accuracy of the edge fluxes and whose degrees of freedom are collocated with their positions. We extend these ideas to FE formulations by using the multi-scale flux to define a stabilizing H(curl) diffusion kernel. Symmetrization of this kernel yields an artificial diffusion term that can be used to stabilize a standard Galerkin formulation of the governing equations without requiring tunable mesh-dependent stabilization parameters.
To conclude the talk we will briefly touch upon the implementation of these schemes in Sandia's semiconductor device modeling code CHARON and present numerical results for a suite of standard advection tests, and simulations of a PN diode and an n-channel MOSFET device, which demonstrate the performance of the methods for a fully coupled drift-diffusion system. This is joint work with K. Peterson, M. Perego and X. Gao.
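For readers unfamiliar with the classical one-dimensional building block mentioned above, the Scharfetter-Gummel edge flux comes from solving the constant-coefficient equation $-D u' + v u = \mathrm{const}$ exactly on an edge; a hedged sketch (function names and parameter values are mine, not from the talk):

```python
import math

def bernoulli(x):
    """Bernoulli function B(x) = x / (exp(x) - 1), with the removable
    singularity at x = 0 handled by its first-order expansion."""
    if abs(x) < 1e-10:
        return 1.0 - x / 2.0
    return x / math.expm1(x)

def sg_flux(u1, u2, v, D, h):
    """Scharfetter-Gummel flux on an edge of length h: the exact constant
    flux of -D u' + v u with end values u1 (upstream) and u2 (downstream),
      J = (D/h) * (B(-Pe) * u1 - B(Pe) * u2),  Pe = v h / D."""
    pe = v * h / D  # edge Peclet number
    return (D / h) * (bernoulli(-pe) * u1 - bernoulli(pe) * u2)
```

The two limits show why the scheme is stable without tunable parameters: for v = 0 the flux reduces to the central diffusive difference, and for large Peclet number it tends to pure upwinding, v times the upstream value.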

Annalisa Buffa: Isogeometric mortaring

We introduce the mortar method for isogeometric discretization and discuss various choices of Lagrange multipliers to enforce the weak continuity at the subdomain interfaces. Moreover, we will discuss the application of the mortar method to contact mechanics.

Erik Burman: Stabilized finite element methods for non-symmetric, non-coercive and ill-posed problems

In numerical analysis the design and analysis of computational methods is often based on, and closely linked to, a well-posedness result for the underlying continuous problem. In particular, the continuous dependence on data of the continuous model is inherited by the computational method when such an approach is used. In this talk our aim is to design a stabilised finite element method that can exploit continuous dependence of the underlying physical problem without making use of a standard well-posedness result such as the Lax-Milgram lemma or the Babuska-Brezzi theorem. This is of particular interest for inverse problems or data assimilation problems which may not enter the framework of the above-mentioned well-posedness results, but can nevertheless satisfy some weak continuous dependence properties. First we will discuss non-coercive elliptic and hyperbolic equations where the discrete problem can be ill-posed even for well-posed continuous problems, and then we will discuss the linear elliptic Cauchy problem as an example of an ill-posed problem where there are continuous dependence results available that are suitable for the framework that we propose.

Alexey Chernov: Improved stability estimates for the hp-Raviart-Thomas projection operator on quadrilaterals

Stability of the hp-Raviart-Thomas projection operator as a mapping H^1(K) -> H^1(K) on the reference cube K in R^3 has been addressed e.g. in [2]; see also [1,3]. These results are suboptimal w.r.t. the polynomial degree. In this talk we present improved stability estimates for the hp-Raviart-Thomas projection operator on quadrilaterals and implications for the analysis of the mixed hp-DGFEM. (joint work with Herbert Egger, TU Darmstadt) [1] Mark Ainsworth and Katia Pinchedez. hp-approximation theory for BDFM and RT finite elements on quadrilaterals. SIAM J. Numer. Anal., 40(6):2047-2068 (electronic) (2003), 2002. [2] Dominik Schötzau, Christoph Schwab, and Andrea Toselli. Mixed hp-DGFEM for incompressible flows. SIAM J. Numer. Anal., 40(6):2171-2194 (electronic) (2003), 2002. [3] Dominik Schötzau, Christoph Schwab, and Andrea Toselli. Mixed hp-DGFEM for incompressible flows. II. Geometric edge meshes. IMA J. Numer. Anal., 24(2):273-308, 2004.

Paul Childs: Numerical analysis and seismic imaging

Modern approaches to numerical solution of PDEs have not been much used in the seismic industry. In this talk we give an overview of industrial seismic imaging, review the established solvers used in industry, and consider the use of new generation PDE solvers for oilfield applications.

Snorre Christiansen: Upwinding in finite element systems

We present an upwinding technique for finite elements which is compatible with the de Rham sequence. It is related to exponential fitting. For this method applied to convection-diffusion equations we also present a stability estimate.

Bernardo Cockburn: The HDG methods I-II

We provide an introduction to the devising and analysis of the so-called hybridizable discontinuous Galerkin (HDG) methods. We do this in the framework of steady-state diffusion problems. After showing how to devise and implement the methods, we uncover their basic stabilization mechanism and discuss the two roles played by their stabilization function. Then the relation between HDG, mixed methods and finite volume methods will be explored. Finally, the extension of the HDG methods to various PDEs will be sketched.

Oleg Davydov: Kernel Based Finite Difference Methods

I will present an overview of the recent meshless techniques for PDEs based on finite difference type discretisation via numerical differentiation on irregular centres with the help of positive definite kernel interpolation.
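A minimal sketch of the idea, under my own illustrative choices (a Gaussian kernel, three 1D centres, and the second-derivative functional; none of these specifics are from the talk): finite-difference-type weights are obtained by solving a small kernel interpolation system, so the same recipe works on irregular centres.

```python
import math

def gauss_d2(r2, eps):
    """Second x-derivative of the Gaussian kernel exp(-eps^2 (x - xi)^2),
    written in terms of the squared distance r2 = (x - xi)^2."""
    return (4 * eps**4 * r2 - 2 * eps**2) * math.exp(-eps**2 * r2)

def solve(A, b):
    """Tiny dense solver: Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fd_weights(centers, xc, eps):
    """Kernel-based FD weights for u''(xc) on the given (possibly irregular)
    centres: solve A w = b with A_ij = phi(x_i - x_j) and
    b_i = (d^2/dx^2) phi(. - x_i) evaluated at xc."""
    A = [[math.exp(-eps**2 * (xi - xj)**2) for xj in centers] for xi in centers]
    b = [gauss_d2((xc - xi)**2, eps) for xi in centers]
    return solve(A, b)
```

In the flat-kernel limit the weights approach the classical polynomial finite-difference weights, which is one way to see the connection to standard FD methods.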

Andreas Dedner: Discontinuous Galerkin methods for surface PDEs

We will be studying the discretization of PDEs on general surfaces using the Discontinuous Galerkin method. For a linear advection-diffusion equation, a priori error estimates are derived. The main challenge is due to the approximation of the surface by a non-smooth surface on which we solve the problem. This approximation leads to problems with the element normals and thus with the inter-element fluxes, which have to be carefully chosen to guarantee stability. The scheme is implemented in the DUNE framework and is part of a project to couple different kinds of surface-bulk PDEs using different discretization techniques. We are developing an infrastructure to couple bulk and surface finite element methods, as well as bulk finite elements with boundary element methods.

Leszek Demkowicz: Discontinuous Petrov Galerkin (DPG) Method with Optimal Test Functions I-II

The coming June will mark the fifth anniversary of the first two papers in which Jay Gopalakrishnan and I proposed a novel Finite Element (FE) technology based on what we called the ``ultra-weak variational formulation'' and the idea of computing (approximately) optimal test functions on the fly [1,2,3]. We called it the ``Discontinuous Petrov Galerkin Method''. Shortly afterward we learned that we owned neither the concept of the ultra-weak formulation nor the name of the DPG method, both introduced in a series of papers by colleagues from Milano: C. L. Bottasso, S. Micheletti, P. Causin and R. Sacco, several years earlier. The name ``ultra-weak'' was stolen from O. Cessenat and B. Despres. But the idea of computing optimal test functions was new... From the very beginning we were aware of the fact that the Petrov-Galerkin formulation is equivalent to a Minimum Residual Method (generalized Least Squares) in which the (minimized) residual is measured in a dual norm, an idea pursued much earlier by colleagues from Texas A&M: J. Bramble, R. Lazarov and J. Pasciak. Jay and I were lucky; a few months after putting [1,2] online, Wolfgang Dahmen and Chris Schwab presented essentially the same approach, pointing to a connection with mixed methods and the fact that the use of discontinuous test functions is not necessary. So, five years, over 20 papers and three Ph.D. dissertations later, I will attempt to summarize in the two lectures the fundamentals, present extensive numerical results and outline the current frontier. For more up-to-date information, see the presentations given during the first Least Squares/DPG workshop organized at ICES last November. Lecture 1: The DPG method guarantees stability for any well-posed linear problem. We will discuss the equivalence of several formulations: Petrov-Galerkin method with optimal test functions, minimum residual formulation and a mixed formulation.
We will summarize well-posedness results for formulations with broken test functions: the ultra-weak formulation based on first-order systems and the formulation derived from standard second-order equations. Standard model problems (Poisson, linear elasticity, Stokes, linear acoustics and Maxwell equations) will be used to illustrate the methodology with h-, p-, and hp-convergence tests. The DPG method comes with a built-in a posteriori error evaluator (not estimator...), which provides a natural framework for adaptivity. Lecture 2: Singular perturbation problems. Extrapolation to nonlinear problems. The DPG methodology allows for controlling the norm in which we want to converge by selecting the right norm for the residual. I will show how the idea translates into superb stability properties (and eliminates the need for stabilization) for convection-dominated problems: convection-dominated diffusion, incompressible and compressible Navier-Stokes equations [4,5,6,7]. [1] L. Demkowicz and J. Gopalakrishnan, ``A class of discontinuous Petrov-Galerkin methods. Part I: The transport equation,'' CMAME: 199, 23-24, 1558-1572, 2010. [2] L. Demkowicz and J. Gopalakrishnan, ``A class of discontinuous Petrov-Galerkin methods. Part II: Optimal test functions,'' Num. Meth. Part. D.E.: 27, 70-105, 2011. [3] ``An Overview of the DPG Method,'' ICES Report 2013/2; also in ``Recent Developments in Discontinuous Galerkin Finite Element Methods for Partial Differential Equations'', eds: X. Feng, O. Karakashian, Y. Xing, IMA Publications, Springer-Verlag, 2013. [4] L. Demkowicz and N. Heuer, ``Robust DPG Method for Convection-Dominated Diffusion Problems'', SIAM J. Num. Anal. 51: 2514-2537, 2013; see also ICES Report 2011/13. [5] J. Chan, N. Heuer, T. Bui-Thanh and L. Demkowicz, ``Robust DPG Method for Convection-dominated Diffusion Problems II: Natural Inflow Condition'', Comput. Math. Appl., 2013, in print; see also ICES Report 2012/21. [6] Nathan Roberts.
``A Discontinuous Petrov-Galerkin Methodology for Incompressible Flow Problems'', PhD thesis, University of Texas at Austin, August 2013. (supervisors: L. Demkowicz and R. Moser). [7] Jesse Chan,``A DPG Method for Convection-Diffusion Problems'', PhD thesis, University of Texas at Austin, July 2013 (supervisors: L. Demkowicz and R. Moser).

Alan Demlow: A posteriori error estimation in the finite element exterior calculus framework

We will give an overview of residual-type a posteriori error estimation techniques applied to finite element approximations of Hodge-Laplace problems within the finite element exterior calculus (FEEC) framework. Special attention will be given to harmonic forms, their adaptive approximation, and how the quality of their approximation affects the overall error in approximating solutions to the Hodge-Laplace problem.

Charles Elliott: Evolving Surface Finite Element method

I will discuss the use of evolving finite element spaces for the numerical solution of PDEs on evolving domains.

Alexandre Ern: Hybrid high-order schemes on general meshes for elliptic PDEs

We develop and analyze a family of arbitrary-order, compact-stencil discretization schemes for elliptic PDEs on polyhedral meshes. The key idea is to reconstruct differential operators cell-wise in terms of the local degrees of freedom. Optimal error estimates for the flux and the potential are derived and illustrated numerically. Links with other recent approaches from the literature are discussed. The methodology is also applied to linear elasticity problems, leading to locking-free schemes.

Ivan Graham: On shifted Laplace and related preconditioners for finite element approximations of the Helmholtz equation

As a model problem for high-frequency wave scattering, we study the boundary value problem (1): $- (\Delta + k^2) u = f$ in $\Omega$, $\partial u/\partial n - i k u = g$ on $\Gamma$, where $\Omega$ is a bounded domain in $\mathbb{R}^d$ with boundary $\Gamma$. Our results also apply to sound-soft scattering problems in truncated exterior domains. Finite element approximations of this problem for high wavenumber $k$ are notoriously hard to solve. The analysis of Krylov-space-based iterative solvers such as GMRES is also hard, since the corresponding system matrices are complex, non-Hermitian and usually highly non-normal, and so information about spectra and condition numbers of the system matrices generally does not give much information about the convergence rate of iterative methods. Quite a lot of recent research has focussed on preconditioning (1) using an approximate solution of the ``shifted Laplace'' problem (2): $- (\Delta + k^2 + i \epsilon) u = f$ in $\Omega$, $\partial u/\partial n - i \mu(k,\epsilon) u = g$ on $\Gamma$, for some function $\mu$. Let $A, A_\epsilon$ denote the system matrices for discretizations of (1) and (2) respectively, and let $B_\epsilon^{-1}$ denote any (practically useful) approximate inverse for $A_\epsilon$. It is easy to see that sufficient conditions for $B_\epsilon^{-1}$ to be a good GMRES preconditioner for $A$ are: (i) $A_\epsilon^{-1}$ should be a good preconditioner for $A$ and (ii) $B_\epsilon^{-1}$ should be a good preconditioner for $A_\epsilon$. It is generally observed that (i) holds if the ``absorption'' parameter $\epsilon>0$ is not taken too large, while (ii) holds (e.g. for geometric multigrid) provided $\epsilon$ is large enough. However there is no rigorous explanation for these observations. The first part of the talk will explore sufficient conditions on $\epsilon$ so that (i) holds. This uses techniques from PDE analysis of (1) and (2) in the high-frequency case, in particular the application of Morawetz multiplier theory.
These theoretical tools yield matrix estimates that can then be used in the numerical linear algebra. In the second part of the talk we consider requirement (ii), analysing the case when $B_\epsilon^{-1}$ is defined by classical additive Schwarz domain decomposition methods. The analysis here is quite different to the classical analysis of Cai and Widlund, which does not allow $k$ to become large. Here we use a coercivity argument in the natural $k$-dependent energy norm to estimate the field of values of the preconditioned matrix. This analysis holds for $k$ arbitrarily large. The analysis shows that there is a gap between the ranges of $\epsilon$ which ensure conditions (i) and (ii). Practical exploration of the performance of the solver in the gap suggests that efficient algorithms can still be constructed for solving this problem with high wavenumber $k$ using several variants of classical domain decomposition methods. New directions for future analysis are also suggested by the experiments. This is joint work with Paul Childs (Schlumberger Gould Research, Cambridge), Martin Gander (Geneva), Euan Spence (Bath), Douglas Shanks (Bath) and Eero Vainikko (Tartu).
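To make condition (i) concrete, here is a toy 1D illustration entirely of my own devising (finite differences, Dirichlet rather than impedance boundary conditions, and exact solves with $A_\epsilon$ playing the role of $B_\epsilon^{-1}$): Richardson iteration for the Helmholtz matrix $A$ preconditioned by the shifted matrix $A_\epsilon$, where the absorption damps every error mode.

```python
def helmholtz_matrix(n, k2):
    """Tridiagonal FD matrix for -u'' - k2*u on (0,1), Dirichlet BCs,
    n interior points; returned as (sub, diag, sup) bands."""
    h = 1.0 / (n + 1)
    diag = [2.0 / h**2 - k2 for _ in range(n)]
    off = [-1.0 / h**2] * (n - 1)
    return off, diag, off

def tri_solve(sub, diag, sup, rhs):
    """Thomas algorithm for a (complex) tridiagonal system."""
    n = len(diag)
    c, d = [0j] * n, [0j] * n
    c[0] = sup[0] / diag[0] if n > 1 else 0j
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        den = diag[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / den
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / den
    x = [0j] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def matvec(sub, diag, sup, x):
    n = len(x)
    return [(sub[i - 1] * x[i - 1] if i > 0 else 0)
            + diag[i] * x[i]
            + (sup[i] * x[i + 1] if i < n - 1 else 0) for i in range(n)]

# Helmholtz problem (k = 5) preconditioned by the shifted problem (shift i*eps)
n, k, eps = 100, 5.0, 10.0
A = helmholtz_matrix(n, k**2)
Aeps = helmholtz_matrix(n, k**2 + 1j * eps)
f = [1.0 + 0j] * n

u = [0j] * n
res0 = max(abs(r) for r in f)
for _ in range(20):  # preconditioned Richardson: u += Aeps^{-1} (f - A u)
    r = [fi - ai for fi, ai in zip(f, matvec(*A, u))]
    du = tri_solve(*Aeps, r)
    u = [ui + di for ui, di in zip(u, du)]
res = max(abs(fi - ai) for fi, ai in zip(f, matvec(*A, u)))
```

In this toy setting each error mode contracts by a factor $\epsilon / \sqrt{(\lambda_j - k^2)^2 + \epsilon^2}$, so the residual drops quickly for this moderate $k$; the whole point of the talk is what happens to such arguments as $k$ grows and exact shifted solves are replaced by practical approximations.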

Johnny Guzman: On the accuracy of finite element approximations to a class of interface problems

We consider piecewise linear approximations to a class of interface problems where the jump of the solution and its normal derivative are prescribed on the interface. We define a simple finite element method that corrects the right-hand side of the natural finite element method for this problem to render it second-order accurate. Nearly second-order accuracy is proved on general quasi-uniform triangular meshes. Although the natural method is far from optimal near the interface, we show that it is optimal for points that are $\sqrt{\log(1/h)}\,h$ away from the interface. This is joint work with Manuel Sanchez-Uribe and Marcus Sarkis.

Ralf Hiptmair: Plane Wave Discontinuous Galerkin Methods I-II

This series of lectures reviews the development of convergence theory for a special class of Trefftz-type discontinuous Galerkin (TDG) methods that rely on plane waves for approximating solutions of the homogeneous Helmholtz equation $-\Delta u - \omega^2 u = 0$ locally. These methods have been designed as a cure for the notorious pollution effect that haunts standard low-order Galerkin schemes for the simulation of wave propagation. The development started with the so-called ultra-weak variational formulation (UWVF) due to Cessenat and Despres [2,3], which was introduced in the form of a variational problem for functions on the mesh skeleton. More than a decade passed until it was realized in [1,6] that this method can be viewed as a rather standard discontinuous Galerkin (DG) method using local trial spaces spanned by plane waves, a plane wave discontinuous Galerkin method (PWDG). This paved the way for a comprehensive convergence analysis of the h-version of the method. Unfortunately, the h-version still suffers from the pollution effect [5]. The analysis of the p-version of PWDG could be advanced in [7], based on techniques borrowed from least squares methods [18]. Of course, here p counts the number of local plane waves. Together with new approximation estimates for plane waves [16,17], this allowed detailed a priori predictions of convergence. This initial theory covered only convex domains and could not accommodate locally refined meshes, which is very unfortunate, because numerical experience [11-13] suggests that PWDG should be used on such meshes. Sloppily speaking, the sophisticated hp-refinement strategy that ensures exponential convergence (in the number of degrees of freedom) for classical polynomial Galerkin finite element approximation of second-order elliptic boundary value problems should also be adopted for PWDG.
Until recently, in the DG context, only polynomial theory could cover this setting [4,14], but it remained outside the scope of existing TDG theory. Only in [9] could asymptotic quasi-optimality of PWDG solutions be established, assuming merely shape-regular families of meshes. Still, these estimates were too weak to yield exponential convergence. It took sophisticated approximation theory for harmonic polynomials from [10], analytic elliptic regularity theory developed by M. Melenk [15], and the clever use of weighted norms to accomplish the proof of exponential convergence of hp-PWDG for the Helmholtz equation in 2D on domains with piecewise analytic boundaries [8]. This presentation will be supplemented by the lecture of Andrea Moiola, dedicated to approximation properties of plane wave spaces, and that of Ilaria Perugia, which will highlight TDG results for the time-harmonic Maxwell equations. With C. Gittelson(2), R. Hiptmair(1), A. Moiola(3), and I. Perugia(4): (1) Seminar for Applied Mathematics, ETH Zürich; (2) Neue Kantonsschule Aarau, CH-5000 Aarau, Switzerland; (3) Department of Mathematics and Statistics, University of Reading, Whiteknights, Berkshire RG6 6AX, UK; (4) Faculty of Mathematics, University of Vienna, 1090 Vienna, Austria. References: [1] A. Buffa and P. Monk, Error estimates for the ultra weak variational formulation of the Helmholtz equation, Math. Mod. Numer. Anal., 42 (2008), pp. 925-940. [2] O. Cessenat and B. Després, Application of an ultra weak variational formulation of elliptic PDEs to the two-dimensional Helmholtz equation, SIAM J. Numer. Anal., 35 (1998), pp. 255-299. [3] O. Cessenat and B. Despres, Using plane waves as base functions for solving time harmonic equations with the ultra weak variational formulation, J. Computational Acoustics, 11 (2003), pp. 227-238. [4] X. Feng and H. Wu, hp-discontinuous Galerkin methods for the Helmholtz equation with large wave number, Math.
Comp., 80 (2011), pp. 1997-2024. [5] C. Gittelson and R. Hiptmair, Dispersion analysis of plane wave discontinuous Galerkin methods, Tech. Rep. 2012-42, Seminar for Applied Mathematics, ETH Zürich, Switzerland, 2012. To appear in International Journal for Numerical Methods in Engineering. [6] C. Gittelson, R. Hiptmair, and I. Perugia, Plane wave discontinuous Galerkin methods: Analysis of the h-version, Math. Model. Numer. Anal., 43 (2009), pp. 297-331. [7] R. Hiptmair, A. Moiola, and I. Perugia, Plane wave discontinuous Galerkin methods for the 2d Helmholtz equation: Analysis of the p-version, SIAM J. Numer. Anal., 49 (2011), pp. 264-284. [8] R. Hiptmair, A. Moiola, and I. Perugia, Plane wave discontinuous Galerkin methods: Exponential convergence of the hp-version, Report 2013-31, SAM, ETH Zürich, Switzerland, 2013. Submitted to Found. Comput. Math. [9] R. Hiptmair, A. Moiola, and I. Perugia, Trefftz discontinuous Galerkin methods for acoustic scattering on locally refined meshes, Appl. Num. Math., 79 (2013), pp. 79-91. [10] R. Hiptmair, A. Moiola, I. Perugia, and C. Schwab, Approximation by harmonic polynomials in star-shaped domains and exponential convergence of Trefftz hp-DGFEM, Math. Modelling Numer. Analysis, 48 (2014), pp. 727-752. [11] T. Huttunen, M. Malinen, and P. Monk, Solving Maxwell's equations using the ultra weak variational formulation, J. Comp. Phys., 223 (2007), pp. 731-758. [12] T. Huttunen and P. Monk, The use of plane waves to approximate wave propagation in anisotropic media, J. Computational Mathematics, 25 (2007), pp. 350-367. [13] T. Huttunen, P. Monk, and J. Kaipio, Computational aspects of the ultra-weak variational formulation, J. Comp. Phys., 182 (2002), pp. 27-46. [14] J. Melenk, A. Parsania, and S. Sauter, General DG-methods for highly indefinite Helmholtz problems, Journal of Scientific Computing, (2013), pp. 1-46. [15] J. M. Melenk, hp-finite element methods for singular perturbations, vol.
1796 of Lecture Notes in Mathematics, Springer-Verlag, Berlin, 2002. [16] A. Moiola, R. Hiptmair, and I. Perugia, Plane wave approximation of homogeneous Helmholtz solutions, ZAMP, 62 (2011), pp. 809-837. [17] A. Moiola, R. Hiptmair, and I. Perugia, Vekua theory for the Helmholtz operator, ZAMP, 62 (2011), pp. 779-807. [18] P. Monk and D. Wang, A least squares method for the Helmholtz equation, Computer Methods in Applied Mechanics and Engineering, 175 (1999), pp. 121-136.
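The Trefftz property underlying all of this is simply that each plane wave basis function solves the homogeneous Helmholtz equation exactly. A quick numerical check (my own toy verification, not from the lectures): a second-order finite difference approximation of $-\Delta u - \omega^2 u$ applied to a plane wave should return something near zero.

```python
import cmath, math

def plane_wave(x, y, omega, theta):
    """u(x) = exp(i*omega*(d . x)) with unit direction d = (cos t, sin t)."""
    return cmath.exp(1j * omega * (math.cos(theta) * x + math.sin(theta) * y))

def helmholtz_residual(omega, theta, x=0.3, y=0.7, h=1e-4):
    """Five-point FD approximation of -Laplace(u) - omega^2 u at (x, y)."""
    u = plane_wave
    lap = (u(x + h, y, omega, theta) + u(x - h, y, omega, theta)
           + u(x, y + h, omega, theta) + u(x, y - h, omega, theta)
           - 4 * u(x, y, omega, theta)) / h**2
    return -lap - omega**2 * u(x, y, omega, theta)
```

The residual is zero up to the $O(h^2 \omega^4)$ truncation error of the stencil, whereas a generic smooth function would leave an $O(1)$ residual; this exact local solvability is what PWDG trial spaces exploit.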

Paul Houston: hp-Version Discontinuous Galerkin Methods on Polygonal and Polyhedral Meshes

In this talk we consider the hp-version interior penalty discontinuous Galerkin method for the discretization of second-order elliptic partial differential equations on general computational meshes consisting of polygonal/polyhedral elements. By admitting such general meshes, this class of methods allows for the approximation of problems posed on computational domains which may contain a huge number of local geometrical features, or micro-structures. While standard numerical methods can be devised for such problems, the computational effort may be extremely high, as the minimal number of elements needed to represent the underlying domain can be very large. In contrast, the minimal dimension of the underlying (composite) finite element space based on general polytopic meshes is independent of the number of geometric features. Here we consider both the a priori and a posteriori error analysis of this class of methods, as well as their application within Schwarz-type domain decomposition preconditioners. This is joint work with Paola Antonietti (MOX, Milan), Andrea Cangiani (Leicester), Manolis Georgoulis (Leicester) and Stefano Giani (Durham).

Thomas Hughes: Isogeometric Analysis: Introduction and recent developments I-II

Designs are encapsulated in CAD (Computer Aided Design) systems and simulation is performed in FEA (Finite Element Analysis) systems. FEA requires the conversion of CAD descriptions to analysis-suitable formats, leading to finite element meshes. The conversion process involves many steps, is tedious and labor intensive, and is the major bottleneck in the engineering design-through-analysis process, accounting for more than 80% of overall analysis time. This is a major impediment to the product development cycle. The technical objectives are to create a new framework simultaneously suitable for both design and analysis, thereby eliminating the bottleneck, and to leverage this framework to develop fundamentally new and improved computational mechanics methodologies to efficiently solve vexing problems. The key concept utilized is a new paradigm termed Isogeometric Analysis (IGA), based on rich geometric descriptions originating in CAD, resulting in one geometric model that is suitable for both design and analysis. In the few short years since its inception [1], IGA has become a focus of research within both the fields of FEA and CAD. For further background, see [2]. The purpose of this talk is to introduce and review recent progress toward developing IGA procedures that do not involve traditional mesh generation and geometry clean-up steps, that is, in which the CAD file is directly utilized as the analysis input file; to summarize some of the mathematical developments within IGA that confirm the superior accuracy and robustness of spline-based approximations compared with traditional FEA; and to present some applications of IGA technology to problems of solids, structures and fluids that illustrate its advantages. References [1] T.J.R. Hughes, J.A. Cottrell and Y. Bazilevs, Isogeometric Analysis: CAD, Finite Elements, NURBS, Exact Geometry and Mesh Refinement, Computer Methods in Applied Mechanics and Engineering, 194 (2005), 4135-4195. [2] J.A. Cottrell, T.J.R.
Hughes and Y. Bazilevs, Isogeometric Analysis: Toward Integration of CAD and FEA, Wiley, Chichester, U.K., 2009.
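
As a minimal illustration of the spline technology underlying IGA (an editorial sketch, not taken from the talk), the B-spline basis functions from which NURBS are built can be evaluated with the Cox--de Boor recursion; the knot vector and degree below are example choices:

```python
def bspline_basis(knots, degree, t):
    """All B-spline basis functions of the given degree at parameter t,
    computed with the Cox-de Boor recursion (in place, degree by degree)."""
    n = len(knots) - degree - 1          # number of basis functions
    # degree 0: characteristic functions of the knot spans
    N = [1.0 if knots[i] <= t < knots[i + 1] else 0.0
         for i in range(len(knots) - 1)]
    if t == knots[-1]:                   # right endpoint: use the last nonempty span
        for i in range(len(N) - 1, -1, -1):
            if knots[i] < knots[i + 1]:
                N[i] = 1.0
                break
    for p in range(1, degree + 1):
        for i in range(len(knots) - p - 1):
            # convex-combination weights; 0/0 conventions handle repeated knots
            left = 0.0 if knots[i + p] == knots[i] else \
                (t - knots[i]) / (knots[i + p] - knots[i]) * N[i]
            right = 0.0 if knots[i + p + 1] == knots[i + 1] else \
                (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * N[i + 1]
            N[i] = left + right
    return N[:n]
```

On a clamped knot vector these functions are nonnegative and sum to one, which is what makes the same representation usable for both geometry and analysis.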

Max Jensen: A Finite Element Method for Hamilton-Jacobi-Bellman equations

Hamilton-Jacobi-Bellman equations describe how the cost of an optimal control problem changes as problem parameters vary. This talk will address how Galerkin methods can be adapted to solve these equations efficiently. In particular, I will discuss how the convergence argument of Barles and Souganidis for finite difference schemes can be extended to Galerkin finite element methods so as to ensure convergence to viscosity solutions. A key question in this regard is the formulation of the consistency condition. Owing to the Galerkin approach, coercivity properties of the HJB operator may also be inherited by the numerical scheme. In this case one achieves, besides uniform convergence, also strong $H^1$ convergence of the numerical solutions on unstructured meshes.
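
For orientation (notation is editorial, not taken from the abstract), the equations in question have the schematic form

```latex
\sup_{\alpha \in A} \bigl( -a^{\alpha} : D^2 u - b^{\alpha} \cdot \nabla u + c^{\alpha} u - f^{\alpha} \bigr) = 0 \quad \text{in } \Omega,
\qquad u = g \quad \text{on } \partial\Omega,
```

where the supremum ranges over the admissible controls $\alpha$; in the Barles--Souganidis framework a scheme converges to the viscosity solution if it is monotone, stable and consistent.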

Robert Kirby: Bernstein polynomials and finite element algorithms

Bernstein polynomials form a nonnegative, rotationally invariant, partition-of-unity basis for polynomials on the simplex. What is more, they possess very special structure that allows optimal-complexity algorithms for the evaluation and application of finite element operators. In this talk, I will survey these results and also show recent applications to the construction of spectrally efficient algorithms for the de Rham complex on the one hand and discontinuous Galerkin methods on the other.
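
A minimal sketch (editorial, restricted to the 1-simplex $[0,1]$) of the basis and of de Casteljau evaluation, which exploits the recursive convex-combination structure the abstract alludes to:

```python
from math import comb

def bernstein_basis(n, x):
    """Degree-n Bernstein basis on [0,1]: B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i)."""
    return [comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)]

def de_casteljau(coeffs, x):
    """Evaluate sum_i c_i B_{i,n}(x) by repeated convex combinations
    (de Casteljau algorithm), without forming the basis explicitly."""
    c = list(coeffs)
    while len(c) > 1:
        c = [(1 - x) * a + x * b for a, b in zip(c, c[1:])]
    return c[0]
```

Nonnegativity and the partition-of-unity property are visible directly from the formula; the analogous properties hold for the multivariate basis in barycentric coordinates on a simplex.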

Omar Lakkis: NVFEM: a Galerkin method for (fully) nonlinear elliptic equations

Genuinely fully nonlinear elliptic equations (abbreviated FNEs) were somewhat overlooked by computational mathematicians until fairly recently. Oliker and Prussner (1988) made a first attempt at solving the Monge--Ampère equation, which is to FNEs what the Poisson equation is to linear elliptic equations. Since then, in parallel with the boom in the analysis of viscosity solutions for FNEs (the variational weak-divergence Sobolev framework being inadequate), there has been a steady development of monotone finite difference schemes based on the maximum principle, starting with Trudinger--Kuo in the early nineties and continuing more recently with the work of Benamou, Oberman and Froese. On the Galerkin side progress was much slower and started picking up only ten years ago, most notably with the work of Glowinski & Dean (2005), Feng & Neilan (2010), Böhmer (2010) and Davydov & Saeed (2012). Against this background, Pryer and Lakkis have introduced a more direct approach based on ``Hessian recovery'', known as the nonvariational finite element method (NVFEM). The NVFEM turns out to be quite flexible: it allows the nonlinear solver to go beyond the Monge--Ampère framework and cover a wider class of equations. It also has distinctive features, such as (1) commuting with Newton's method, i.e., discretize-then-linearize is equivalent to linearize-then-discretize in many cases, and (2) demanding very little of the mesh. I will discuss these developments first, then look at the NVFEM and explain how it can be applied to solve general classes of FNEs. I will close by looking at adaptive refinement techniques, which is what gives the edge to Galerkin methods, whereas finite differences work well on uniform meshes and for smooth solutions. I will illustrate the work with applications.
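
Schematically (editorial notation), the model problem and the class of equations targeted are

```latex
\det D^2 u = f \ \text{ in } \Omega, \quad u \text{ convex},
\qquad \text{and more generally} \qquad
F(x, u, \nabla u, D^2 u) = 0,
```

where the NVFEM treats the (linearized) nonvariational problem $A(x) : D^2 u = f$, whose coefficient $A$ need not be differentiable, by simultaneously recovering a finite element Hessian of the discrete solution.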

Konstantin Lipnikov: The mimetic finite difference method for elliptic problems I-II

The mimetic finite difference (MFD) method preserves important mathematical and physical properties of the underlying PDEs, such as conservation laws, symmetry and positivity of the solution, and fundamental identities of vector and tensor calculus. This talk will describe the history, fundamentals, and recent developments of the mimetic finite difference method for elliptic PDEs. The MFD method lies between the finite volume and finite element methods. Like the finite volume method, the MFD method works on arbitrary polygonal, polyhedral and generalized polyhedral meshes. Like the finite element method, it readily handles tensorial coefficients and enforces duality relationships between discrete operators (e.g. divergence and gradient). Combining the best of the two worlds, the MFD method introduces a few unique features. To highlight these features, I'll discuss relationships between a few compatible discretization methods related to the MFD method.
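
The prototypical vector-calculus identity that is mimicked discretely is integration by parts; in schematic (editorial) MFD notation,

```latex
\int_\Omega p \, \nabla \cdot \mathbf{v} \, dx + \int_\Omega \mathbf{v} \cdot \nabla p \, dx = \int_{\partial\Omega} p \, \mathbf{v} \cdot \mathbf{n} \, ds
\quad \longrightarrow \quad
[\mathcal{DIV}\,\mathbf{v},\, p]_{Q} = -[\mathbf{v},\, \mathcal{GRAD}\, p]_{X},
```

so that the discrete divergence and gradient are adjoint to one another with respect to mimetic inner products $[\cdot,\cdot]_Q$ and $[\cdot,\cdot]_X$ on the spaces of cell and face unknowns.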

Charalambos Makridakis: Finite Elements and Multiscale Modelling in Crystalline Materials

In this talk we discuss how finite element methods can be useful in the analysis and design of multiscale models of crystalline materials. We present a new finite element consistency analysis of Cauchy--Born approximations to atomistic models arising in the modeling of crystalline materials in two and three space dimensions. We then construct energy-based numerical methods free of ghost forces in two- and three-dimensional lattices modeled by pair interaction potentials. The analysis hinges on establishing a connection between the coupled system and conforming finite elements. Key ingredients are: (i) a new representation of discrete derivatives related to long-range interactions of atoms as volume integrals of gradients of piecewise linear functions over bond volumes, and (ii) the construction of an underlying globally continuous function representing the coupled modeling method.

Gianmarco Manzini: Nonconforming mimetic methods for diffusion problems

In this talk, we present a new family of mimetic/virtual element schemes for solving elliptic partial differential equations in primal form on unstructured polygonal and polyhedral meshes. These mimetic discretizations are built to satisfy local consistency and stability conditions. The consistency condition is an exactness property, i.e., these schemes are exact when the solution is a polynomial of an assigned degree. In turn, the stability condition enforces the coercivity of the discrete bilinear form and, eventually, the well-posedness of the resulting mimetic scheme. Extension of these schemes to three dimensions requires the construction of high-order quadrature rules for the polygonal faces of polyhedral cells [3]. Such quadrature rules are not available for an arbitrary polygon, and their numerical construction would make the method too expensive. To resolve this issue, we adopt a special choice of the degrees of freedom [4]. Instead of using nodal degrees of freedom, which may be associated with either the mesh vertices or other special nodes on the cell interfaces [2], we use solution moments on faces and inside cells. The construction requires calculating moments of polynomial functions only, which is a problem with a well-known solution. Higher-order schemes are built using higher-order moments. These new mimetic schemes are suitable for the numerical approximation of two- and three-dimensional elliptic problems at any order of accuracy on an arbitrary polygonal or polyhedral mesh. The developed schemes are verified numerically on diffusion problems with constant and spatially variable (possibly discontinuous) tensorial coefficients. We establish the equivalence of this family of mimetic finite difference methods with a virtual element method [1,2], which allows us to perform the error analysis. Bibliography [1] B. Ayuso de Dios, K. Lipnikov, G. Manzini. The nonconforming virtual element method. arXiv:1405.374 [math.NA], 2014. [2] L.
Beirao da Veiga, F. Brezzi, A. Cangiani, G. Manzini, L. D. Marini, and A. Russo. Basic principles of virtual element methods. Math. Models Methods Appl. Sci., 23:199--214, 2013. [3] L. Beirao da Veiga, K. Lipnikov, and G. Manzini. Arbitrary-order nodal mimetic discretizations of elliptic problems on polygonal meshes. SIAM J. Numer. Anal., 49(5):1737--1760, 2011. [4] K. Lipnikov and G. Manzini. A high-order mimetic method on unstructured polyhedral meshes for the diffusion equation. J. Comput. Phys., 272:360--385, 2014.

Andrea Moiola: Approximation by plane and circular waves

The solutions of time-harmonic boundary value problems at high frequencies are strongly oscillatory, and their approximation by piecewise polynomials may require an extremely large number of degrees of freedom. For this reason, several modern finite element methods use basis functions that are piecewise solutions of the underlying PDE; see the lectures of Ralf Hiptmair for the case of the Helmholtz equation, and those of Ilaria Perugia for the Maxwell equations. The convergence analysis of h-, p-, and hp-versions of these schemes (often called "Trefftz methods") requires the proof of new best approximation estimates. We consider the approximation of solutions of the homogeneous Helmholtz equation by finite dimensional spaces of plane, circular and spherical waves (Fourier-Bessel functions). The Vekua transform, a bijective integral operator that maps Helmholtz solutions to harmonic functions defined on the same domain, allows one to reduce this problem to the approximation of harmonic functions by harmonic polynomials. In two and three dimensions, we obtain best approximation estimates with high orders of convergence in the element size and in the dimension of the discrete space used (hp-estimates). For arbitrary two-dimensional star-shaped elements, these bounds are fully explicit; the domain shape comes into play only through a few simple geometric parameters. The extension to electromagnetic and elastic waves is also considered. This is joint work with Ralf Hiptmair, Ilaria Perugia and Christoph Schwab.
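
Concretely, the discrete Trefftz spaces in two dimensions consist of functions of the form (schematic, editorial notation)

```latex
u_p(x) = \sum_{j=1}^{p} \alpha_j \, e^{\mathrm{i} k \, d_j \cdot x}, \quad |d_j| = 1,
\qquad \text{or} \qquad
u_p(r,\theta) = \sum_{|l| \le L} \beta_l \, e^{\mathrm{i} l \theta} J_l(kr),
```

each of which solves $-\Delta u - k^2 u = 0$ exactly; here $J_l$ denotes the Bessel function of the first kind and $k$ the wavenumber.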

Peter Monk: Optimizing thin-film solar photovoltaic devices

This talk will describe a multi-disciplinary investigation of thin-film photovoltaic devices. The project revolves around an innovative multiplasmonic thin-film design for a solar cell and related optical components. It involves scientists at two universities, including experts on fabrication and theory, as well as numerical analysts. I will discuss some of the challenges and pleasures of working in such an interdisciplinary team. One goal of the project is to design and fabricate a surface-multiplasmonic solar cell. After describing a little of the physics, it will become apparent that what is needed is to simulate complex grating structures and to optimize geometric and material properties to enhance the absorption of solar energy. I will describe two approaches to simulation, which have complementary strengths: Rigorous Coupled-Wave Analysis (RCWA), a Fourier-based approach, and a coupled finite element and spectral approach. Since the cost functional is very complicated and likely to have many local minima, we use a differential evolution algorithm to optimize the structures. Several examples will be presented.
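
Differential evolution is a derivative-free population-based optimizer well suited to such rugged cost landscapes. A minimal self-contained DE/rand/1/bin sketch (editorial; the population size, mutation factor F and crossover rate CR below are common textbook choices, not values from the talk):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    population members, apply binomial crossover, keep the trial if it is
    no worse (greedy selection)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)      # ensure at least one mutated entry
            trial = []
            for j in range(dim):
                if j == jrand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the box constraints
                else:
                    v = pop[i][j]
                trial.append(v)
            ct = f(trial)
            if ct <= cost[i]:
                pop[i], cost[i] = trial, ct
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

In practice one would evaluate `f` by a call to the RCWA or finite element solver; since each evaluation is independent, the inner loop parallelizes naturally.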

David Mora: A Virtual Element Method for a Steklov eigenvalue problem.

The aim of this talk is to develop a virtual element method for the Steklov eigenvalue problem. We introduce a variational formulation and establish that its solutions are related to the eigenpairs of a compact operator. We propose a discretization by means of the virtual element method presented in [L. Beir\~ao da Veiga et al., Math. Models Methods Appl. Sci., 23 (2013), pp. 199--214]. Under general assumptions on the computational domain, we establish that the resulting scheme provides a correct approximation of the spectrum, and we prove optimal-order error estimates for the eigenfunctions and a double order for the eigenvalues. We also prove higher-order error estimates for the computation of the eigensolutions on the boundary, which in sloshing problems is the quantity of main interest (the free surface of the liquid). Finally, we report some numerical tests supporting our theoretical results.

Pedro Morin: A posteriori error estimators for weighted norms. Adaptivity for point sources and local errors

We develop a posteriori error estimates for general second order elliptic problems with point sources in two- and three-dimensional domains. We prove a global upper bound and a local lower bound for the error measured in a weighted Sobolev space. The weight belongs to the Muckenhoupt class $A_2$. The purpose of the weight is twofold. On the one hand it weakens the norm around the singularity, and on the other hand it strengthens the norm in a region of interest, yielding localized estimates. The theory hinges on local approximation properties of either the Cl\'ement or the Scott-Zhang interpolation operator, without the need for any modification, and makes use of weighted estimates for fractional integrals and maximal functions. Numerical experiments illustrate the excellent performance of an adaptive algorithm based on the obtained error estimators.
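
For reference (this is the standard definition, not specific to the talk), a weight $w$ belongs to the Muckenhoupt class $A_2$ when

```latex
[w]_{A_2} := \sup_{B} \left( \frac{1}{|B|} \int_B w \, dx \right) \left( \frac{1}{|B|} \int_B w^{-1} \, dx \right) < \infty,
```

the supremum being taken over all balls $B$; for instance, the distance weight $|x - x_0|^{\gamma}$ belongs to $A_2$ on $\mathbb{R}^d$ exactly when $-d < \gamma < d$, which is what permits both weakening and strengthening of the norm.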

Ignacio Muga: DPG Strategies for the Helmholtz Equation

We apply the discontinuous Petrov-Galerkin (DPG) method to the Helmholtz equation. Several strategies are possible, depending on the variational formulation, the test space and the way we norm the test space. For instance, if we use a scaled graph norm on the test space, we find that better results are achieved, under some circumstances, as the scaling parameter approaches the limiting value of zero. We provide an analytical understanding of this phenomenon. We also perform a dispersion analysis on the multiple interacting stencils that form this DPG strategy in its lowest-order setting. The analysis shows that the discrete wavenumbers of the method are complex, explaining the numerically observed artificial dissipation in the computed wave approximations. Since every DPG method is a nonstandard least-squares Galerkin method, its performance is compared with that of a standard least-squares method and of other methods having a similar stencil size.

Nilima Nigam: Pyramidal finite elements

Pyramidal elements can arise as 'glueing' elements in meshes consisting of both tetrahedral and hexahedral elements. We present the construction of two families of high-order conforming finite elements for pyramidal elements, which are compatible with adjacent (tetrahedral or hexahedral) elements. These families satisfy the commuting diagram property, ensuring the stability of mixed finite element discretizations. We demonstrate, in particular, that the use of rational basis functions cannot be avoided. The analysis of errors due to quadrature is non-standard for these elements, and we describe the key ideas. Specifically, even though one has quadrature rules which exactly integrate the basis functions, the analysis of the quadrature-related variational crime must be performed with care. If time permits, we shall present some new developments towards a family of serendipity elements on the pyramid. This work is joint with Joel Phillips and Argyrios Petras, and was motivated in large part by discussions with Prof. Leszek Demkowicz.

Halvor Nilsen: Practical challenges faced when using modern approaches to numerical PDEs to simulate petroleum reservoirs

A primary challenge in reservoir simulation is the geometrical complexity seen in high-fidelity models, which typically have unstructured connections and irregular cell geometries with (very) high aspect ratios. Discretization methods should therefore handle general polyhedral cells, and a simple, generic implementation is almost essential. Another important aspect is that models should be robust with respect to different sets of boundary and initial conditions. In simple models, the coupled set of equations can be decomposed into a flow equation governing fluid pressure, which has more or less elliptic character, and a set of transport equations for phases and/or components that have hyperbolic character. In realistic simulations, however, this division is not clear-cut because of strong couplings between flow and transport. Numerical methods that handle elliptic, hyperbolic, and parabolic equations in a simple manner are therefore highly desirable. Methods capable of handling discontinuities in all material parameters are also essential, since the macroscopic equations are typically formulated in an averaged sense on a large scale. Particular challenges in this direction are discontinuous and anisotropic absolute and relative permeability, as well as discontinuous capillary pressure. We will discuss the above challenges in terms of practical examples, with particular emphasis on implementation. The widely used TPFA method, which is not consistent, will be compared with the consistent MPFA and mimetic methods. We will also discuss advantages and challenges of using higher-order methods from our perspective, in particular for systems with strongly hyperbolic character. For all these challenges it is important to balance the need for an accurate solution against the accuracy of the model itself, which often means that only the lowest-order methods can be justified.
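
To fix ideas, a one-dimensional TPFA sketch (editorial, not from the talk) for $-(K u')' = 0$ on a uniform grid of unit length, with face transmissibilities given by harmonic averaging of the neighbouring cell permeabilities — the standard way of honouring discontinuous coefficients:

```python
def tpfa_1d(K, u_left, u_right):
    """Two-point flux approximation for -(K u')' = 0 on n uniform cells in
    (0,1), Dirichlet values at both ends. Interior transmissibilities use the
    harmonic average of the adjacent cell permeabilities; boundary faces see
    half a cell."""
    n = len(K)
    h = 1.0 / n
    T = [2.0 * K[i] * K[i + 1] / (h * (K[i] + K[i + 1])) for i in range(n - 1)]
    Tl = 2.0 * K[0] / h
    Tr = 2.0 * K[-1] / h
    # assemble the tridiagonal system A u = b (cell-centered unknowns)
    lower = [0.0] + [-T[i - 1] for i in range(1, n)]
    upper = [-T[i] for i in range(n - 1)] + [0.0]
    diag = [(T[i - 1] if i > 0 else Tl) + (T[i] if i < n - 1 else Tr)
            for i in range(n)]
    b = [0.0] * n
    b[0] = Tl * u_left
    b[-1] = Tr * u_right
    # Thomas algorithm (forward elimination, back substitution)
    for i in range(1, n):
        m = lower[i] / diag[i - 1]
        diag[i] -= m * upper[i - 1]
        b[i] -= m * b[i - 1]
    u = [0.0] * n
    u[-1] = b[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (b[i] - upper[i] * u[i + 1]) / diag[i]
    return u
```

In one dimension TPFA happens to be exact for this problem; the inconsistency mentioned above appears on grids that are not aligned with a (tensorial) permeability in higher dimensions.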

Ilaria Perugia: Trefftz-Discontinuous Galerkin Methods for Maxwell's Equations

Several finite element methods used in the numerical discretization of wave problems in the frequency domain are based on incorporating a priori knowledge about the differential equation into the local approximating spaces by using Trefftz-type basis functions, namely functions which belong to the kernel of the considered differential operator. These methods differ from one another not only in the type of Trefftz basis functions used in the approximating spaces, but also in the way continuity is imposed at the interelement boundaries: partition of unity, least squares, Lagrange multipliers or discontinuous Galerkin techniques (see the lectures of Ralf Hiptmair for Trefftz methods for the Helmholtz equation, and those of Andrea Moiola for the approximation properties of Trefftz finite element spaces). In this talk, the construction of Trefftz-discontinuous Galerkin methods for the time-harmonic Maxwell equations, together with their abstract error analysis, will be presented. This analysis requires new stability estimates and regularity results for the continuous problem, which may be of interest in their own right. Some ideas on the time-dependent case will also be given. This is joint work with Ralf Hiptmair and Andrea Moiola.

Daniel Peterseim: Efficient and reliable numerical homogenization beyond scale separation

This talk summarizes some recent results on (semi-)linear elliptic multiscale problems in the absence of strong assumptions such as periodicity or scale separation. I will propose and analyze a new approach to numerical homogenization that is based on the (pre-)computation of roughly $H^{-d}$ local fine-scale problems on patches of size $H \log(1/H)$, where $H$ is the coarse mesh size. The moderate overlap of the patches yields efficiency and suffices to prove textbook convergence of the coarse-scale Galerkin method without any pre-asymptotic or resonance effects. The method is related to the variational multiscale method, and a key result of our error analysis is the proof of the exponential decay of the corresponding fine-scale Green's function. Among the applications of the new method is the acceleration of solvers for linear and nonlinear eigenvalue problems.
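
Schematically (editorial notation), each coarse basis function $\lambda_T$ is replaced by a corrected one, with the corrector obtained from a localized fine-scale problem:

```latex
\text{find } \phi_T \in W_h(\omega_T): \quad a(\phi_T, w) = a(\lambda_T, w) \quad \forall\, w \in W_h(\omega_T),
```

where $W_h(\omega_T)$ is a fine-scale (kernel-of-quasi-interpolation) space supported on a patch $\omega_T$ of diameter $\mathcal{O}(H \log(1/H))$; the corrected functions $\lambda_T - \phi_T$ span the multiscale coarse space used in the Galerkin method.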

Alessandro Russo: Virtual Element Methods for general elliptic equations

In my talk I will address the problem of approximating a general elliptic second order partial differential equation with the Virtual Element Method. Both theoretical and numerical results will be presented.

Giancarlo Sangalli: An isogeometric method for linear nearly-incompressible elasticity with local stress projection

In this talk, we propose an isogeometric method for solving the linear nearly-incompressible elasticity problem. The method is similar to the $\bar B$ formulation, where the volumetric strain is projected onto a lower-degree spline space in order to prevent volumetric locking. In our method, we adopt a local projection onto macro-elements, which are chosen in order to guarantee optimal convergence. Moreover, the locality of the projector allows us to maintain the sparsity of the stiffness matrix, that is, the efficiency of the method. The analysis of the method is based on the inf-sup stability of the associated mixed formulation, obtained via a macro-element technique for spline functions. Numerical tests confirm the theoretical results.
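
In a $\bar B$-type formulation the modified bilinear form reads, schematically (editorial notation),

```latex
a_h(u, v) = 2\mu \int_\Omega \varepsilon(u) : \varepsilon(v) \, dx
+ \lambda \int_\Omega \bar\Pi(\nabla \cdot u) \, \bar\Pi(\nabla \cdot v) \, dx,
```

where $\bar\Pi$ projects the volumetric strain onto a lower-degree spline space, relaxing the incompressibility constraint as $\lambda \to \infty$; in the method of the talk this projection acts locally, macro-element by macro-element, so the stiffness matrix stays sparse.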

Robert Scheichl: Rigorous Numerical Upscaling of Elliptic Multiscale Problems at High Contrast

We discuss the possibility of numerical upscaling for elliptic problems with rough diffusion coefficients at high contrast. Within the general framework of variational multiscale methods, we present a new approach based on novel quasi-interpolation operators with local approximation properties in $L^2$ that are independent of the contrast. These quasi-interpolation operators were first developed in the context of robust domain decomposition methods and used in their analysis. The analysis uses novel weighted Poincaré inequalities and an abstract Bramble-Hilbert lemma. We show that for some relevant classes of high-contrast coefficients, optimal convergence without pre-asymptotic effects caused by microscopic scales or by the high contrast in the coefficient is possible. Ideas on how to extend the method and the analysis to more general coefficients will be discussed. Classes of coefficients that remain critical are characterized via numerical experiments.

Chi-Wang Shu: Discontinuous Galerkin method for hyperbolic equations with delta-singularities I-II

Discontinuous Galerkin (DG) methods are finite element methods with features from high-resolution finite difference and finite volume methodologies, and are suitable for solving hyperbolic equations with nonsmooth solutions. In this talk we will describe our recent work on the study of DG methods for solving hyperbolic equations with $\delta$-singularities in the initial condition, in the source term, or in the solutions. For such singular solutions, many numerical techniques rely on modifications with smooth kernels and hence may severely smear such singularities, leading to large errors in the approximation. On the other hand, DG methods are based on weak formulations and can be designed to solve such problems directly, without modifications, leading to very accurate results. We will discuss both error estimates for model linear equations, in the negative norm and in the strong norm after post-processing, and applications to nonlinear systems, including the rendez-vous systems and the pressureless Euler equations, which involve $\delta$-singularities in their solutions. In the nonlinear case a high-order accurate bound-preserving limiter is crucial to maintain nonlinear stability and to avoid blow-ups of the numerical solution. This is joint work with Yang Yang, Dongming Wei and Xiangxiong Zhang.
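
A minimal sketch (editorial; a Zhang--Shu-type scaling limiter, which is one standard bound-preserving construction, not necessarily the exact limiter of the talk) of how point values of the DG polynomial are shrunk toward the cell average just enough to enforce bounds, without destroying conservation or accuracy:

```python
def scaling_limiter(point_vals, cell_avg, m, M):
    """Linearly rescale the deviations of the polynomial's (quadrature) point
    values about the cell average so that all values lie in [m, M].
    Assumes m <= cell_avg <= M, which conservation of the scheme guarantees."""
    vmin, vmax = min(point_vals), max(point_vals)
    theta = 1.0
    if vmax > M:
        theta = min(theta, (M - cell_avg) / (vmax - cell_avg))
    if vmin < m:
        theta = min(theta, (m - cell_avg) / (vmin - cell_avg))
    return [cell_avg + theta * (v - cell_avg) for v in point_vals]
```

Because the map is a linear rescaling about the cell average, the average (and hence conservation) is preserved whenever the average is computed with the same quadrature weights.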

Benjamin Stamm: A posteriori estimates for discontinuous Galerkin methods using non-polynomial basis functions

Our final goal is to derive a posteriori error estimates for the Adaptive Local Basis (ALB) method, which is used for solving nonlinear eigenvalue problems in the framework of Kohn-Sham models in computational chemistry. The characteristic feature of the ALB method is that it constructs local basis functions by diagonalizing the operator locally and then solves the global eigenvalue problem using the Discontinuous Galerkin (DG) technique. We start by analyzing a posteriori error estimates for the DG method applied to Laplace's equation, but where the nature of the basis functions is unknown, and then enrich the differential operator by adding a potential to the Laplace operator. Understanding how to deal with this equation is the key to understanding the eigenvalue problem. The main challenge is that no inverse estimates are available for generic basis functions. To overcome this, we accept computations on a very fine grid on each element of the DG method, provided they remain local and independent. We then compute local constants that are subsequently used in the error estimates. Finally, we present numerical examples that illustrate the behavior of the estimates.

Gantumur Tsogtgerel: On approximation classes of adaptive finite element methods

Recent studies on the convergence of adaptive methods have shown that these methods generally converge at class-optimal rates with respect to approximation classes that are defined using a modified notion of error, the so-called total error, which is the energy error plus an oscillation term. In this talk, we present characterizations of those approximation classes in terms of membership of the solution and data in Besov spaces. We will also discuss some modest improvements over the existing characterization results for the standard adaptive approximation classes (those defined using the energy error). If time permits, we will go into possible extensions of these results to finite element exterior calculus.
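
Schematically (editorial notation), the approximation class with rate $s$ collects those $u$ for which

```latex
|u|_{\mathcal{A}^s} := \sup_{N \ge 1} \, N^{s} \inf_{\#\mathcal{T} - \#\mathcal{T}_0 \le N} E_{\mathcal{T}}(u) < \infty,
```

where the infimum runs over conforming refinements $\mathcal{T}$ of the initial mesh $\mathcal{T}_0$ and $E_{\mathcal{T}}(u)$ denotes the total error (energy error plus oscillation) on $\mathcal{T}$; the standard classes use the energy error alone in place of $E_{\mathcal{T}}$.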

Frédéric Valentin: Multiscale Hybrid-Mixed Finite Element Method

This work presents an overview of a new family of finite element methods for multiscale problems, named Multiscale Hybrid-Mixed (MHM) methods. MHM methods result from a hybridization procedure which characterizes the unknowns as a direct sum of a ``coarse'' solution and the solutions to problems with Neumann boundary conditions driven by the multipliers. As a result, the MHM method becomes a strategy that naturally incorporates multiple scales while providing solutions with high-order precision for the primal and dual variables. The completely independent local problems are embedded in the upscaling procedure, and computational approximations may be obtained naturally in a parallel computing environment. Also of interest is that the dual variable preserves the local conservation property through a simple post-processing of the primal variable. Well-posedness and best approximation results for the one- and two-level versions of the MHM method show that the method achieves optimal convergence with respect to the mesh parameter and is robust in terms of (small) physical parameters. Also, a face-based a posteriori estimator is shown to be locally efficient and reliable with respect to the natural norms. The general framework is illustrated for the Darcy and linear elasticity equations, and then further extended to reactive-advective-diffusive problems. Numerical results verify the optimal convergence properties as well as the capacity to accurately incorporate heterogeneity and high-contrast coefficients, showing in particular the great performance of the new a posteriori error estimator in driving mesh adaptivity. We conclude that the MHM method, along with its associated a posteriori estimator, is naturally shaped for parallel computing environments and appears to be a highly competitive option for handling realistic multiscale boundary value problems accurately on coarse meshes.