Workshop A1 - Approximation Theory


Organizers: Albert Cohen (Université de Paris 6, France) - Ron DeVore (Texas A&M University, USA) - Peter Binev (University of South Carolina, USA)


 

Talks


July 10, 14:30 ~ 15:20

Nonlinear $n$-term Approximation of Harmonic Functions from Shifts of the Newtonian Potential

Pencho Petrushev

University of South Carolina, USA   -   pencho@math.sc.edu

A basic building block in Classical Potential Theory is the fundamental solution of the Laplace equation in ${\mathbb R}^d$ (the Newtonian potential). Our main goal is to study the rates of nonlinear $n$-term approximation of harmonic functions on the unit ball $B^d$ from shifts of the Newtonian potential with poles outside $\overline{B^d}$ in the harmonic Hardy spaces. Optimal rates of approximation are obtained in terms of harmonic Besov spaces. The main vehicle in establishing these results is the construction of highly localized frames for Besov and Triebel-Lizorkin spaces on the sphere whose elements are linear combinations of a fixed number of shifts of the Newtonian potential.
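For the reader's orientation (standard background, not part of the abstract): up to sign and normalization conventions, the Newtonian potential is $$ \Phi(x) = \frac{1}{(d-2)\,\omega_{d-1}}\,|x|^{2-d} \quad (d\ge 3), \qquad \Phi(x)=\frac{1}{2\pi}\log\frac{1}{|x|} \quad (d=2), $$ where $\omega_{d-1}$ is the surface area of the unit sphere in $\mathbb{R}^d$; a shift with pole $y\notin \overline{B^d}$ is $\Phi(\cdot-y)$.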

Joint work with Kamen Ivanov (Bulgarian Academy of Sciences).



July 10, 15:30 ~ 16:20

Gödel and Turing in approximation theory - On sampling, reconstruction, sparsity and computational barriers

Anders Hansen

University of Cambridge, United Kingdom   -   ach70@cam.ac.uk

The problem of approximating a function from a finite set of samples given by linear measurements has been a core part of approximation theory over the last decades. The key issue is to design a decoder that takes the samples and produces an approximation to the function in an accurate and robust way. Once the decoder is designed, one is faced with the problem of finding a robust algorithm that can compute the output of the decoder, or at least an approximation to it. And herein lies the crucial question: do such algorithms always exist? For linear decoders the answer is always yes; however, for non-linear decoders we show that the answer is often no. Moreover, and perhaps surprisingly, the answer is no for many of the popular non-linear decoders that are used in practice. In fact, a positive answer to the question would imply decidability of famous undecidable problems such as the Halting problem, and hence the problems are formally non-computable. In this talk I will discuss this paradox and demonstrate how, by adding sparsity as a key ingredient, one can assure the existence of robust algorithms in non-linear approximation theory, and explain the success of their use in practice. Moreover, I will show how the above paradox opens the door to a rich classification theory of non-linear decoders that contains many surprises.

Joint work with Alex Bastounis (University of Cambridge), Laura Terhaar (University of Cambridge) and Verner Vlacic (University of Cambridge).



July 10, 17:00 ~ 17:35

Tensorization for data-sparse approximation

Lars Grasedyck

RWTH Aachen University, Germany   -   lgr@igpm.rwth-aachen.de

In this talk we will review techniques for the tensorization of functions or vectors that are given as low-dimensional objects. Tensorization artificially casts them into high-dimensional objects to which low-rank tensor approximation techniques can be applied. The rationale behind this is the fact that low-rank tensors can be represented with a complexity linear in the dimension, thus allowing a complexity logarithmic in the size of objects that --- when stored in a naive way --- would require linear complexity. We characterize the approximability in terms of subspaces and give examples of classes of functions that allow for such a compression of data.
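To make the tensorization idea concrete, here is a hedged numpy sketch (our illustration, not the speaker's code): a vector of length $2^L$ is reshaped into an $L$-fold $2\times\cdots\times 2$ tensor and compressed by successive truncated SVDs, so that structured data needs only a few parameters per level.

```python
import numpy as np

def tt_svd(v, L, eps=1e-10):
    """Quantize a vector of length 2**L into an L-way 2x...x2 tensor
    and compress it by successive truncated SVDs (a TT-SVD sketch)."""
    cores, r = [], 1
    mat = v.reshape(r * 2, -1)                   # unfold level by level
    for _ in range(L - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))   # truncation rank
        cores.append(U[:, :rank].reshape(r, 2, rank))
        r = rank
        mat = (s[:rank, None] * Vt[:rank]).reshape(r * 2, -1)
    cores.append(mat.reshape(r, 2, 1))
    return cores

# Example: samples of exp(x) on a dyadic grid are separable across binary
# digits, so all TT-ranks collapse to 1 after truncation.
L = 12
x = np.linspace(0.0, 1.0, 2**L)
cores = tt_svd(np.exp(x), L)
print([c.shape for c in cores])   # storage is linear in L = log(length)
```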



July 10, 17:40 ~ 18:15

Rescaled Pure Greedy Algorithm for Hilbert and Banach spaces and beyond

Guergana Petrova

Texas A&M University, USA   -   gpetrova@math.tamu.edu

We show that a very simple modification of the Pure Greedy Algorithm for approximating functions by sparse sums from a dictionary in a Hilbert, or more generally a Banach, space has optimal convergence rates. Moreover, this greedy strategy can be applied to convex optimization in Banach spaces. We prove its convergence rates under a suitable behavior of the modulus of uniform smoothness of the objective function and test our algorithm on several data sets involving the log-likelihood function for logistic regression.
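A minimal numpy sketch of one plausible form of the rescaled iteration in a finite-dimensional Hilbert space (our reading of the abstract, with a finite unit-norm dictionary; not the authors' code): the pure greedy update is followed by an optimal rescaling of the whole approximant.

```python
import numpy as np

def rescaled_pga(f, D, steps):
    """Sketch of a rescaled pure greedy iteration in R^n.
    D: (n, N) dictionary with unit-norm columns; f: target vector."""
    G = np.zeros_like(f)
    for _ in range(steps):
        r = f - G
        j = np.argmax(np.abs(D.T @ r))       # best-correlated atom
        H = G + (D[:, j] @ r) * D[:, j]      # pure greedy update
        s = (f @ H) / (H @ H)                # optimal rescaling of H
        G = s * H
    return G

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary
f = D[:, :5] @ rng.standard_normal(5)        # 5-sparse target
G = rescaled_pga(f, D, 20)
print(np.linalg.norm(f - G))                 # residual decays with the steps
```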

Joint work with Zheming Gao (University of North Carolina).



July 10, 18:20 ~ 18:55

Data assimilation and sampling in Banach spaces

Przemek Wojtaszczyk

ICM University of Warsaw, Poland   -   wojtaszczyk@icm.edu.pl

We present a new framework for interpolation in general Banach spaces. It is based on the recent study of data assimilation in the context of the reduced basis method; see Binev, P., Cohen, A., Dahmen, W., DeVore, R., Petrova, G., Wojtaszczyk, P.: Data Assimilation in Reduced Modeling, SIAM/ASA J. Uncertainty Quantification. Our framework unifies in particular the earlier approaches to sampling using point values and Fourier coefficients.

Joint work with Ron DeVore (Texas A&M University, USA) and Guergana Petrova (Texas A&M University, USA).



July 11, 14:30 ~ 15:20 - Room B3

Multiscale Methods for Dictionary Learning, Regression and Optimal Transport for data near low-dimensional sets

Mauro Maggioni

Johns Hopkins University, United States of America   -   mauro.maggioni@jhu.edu

We discuss a family of ideas, algorithms, and results for analyzing various new and classical problems in the analysis of high-dimensional data sets. The methods we discuss perform well when the data is (nearly) intrinsically low-dimensional. They rely on the idea of performing suitable multiscale geometric decompositions of the data, and exploiting such decompositions to perform a variety of tasks in signal processing and statistical learning. In particular, we discuss the problem of dictionary learning, where one is interested in constructing, given a training set of signals, a set of vectors (a dictionary) such that the signals admit a sparse representation in terms of the dictionary vectors. We then discuss the problem of regressing a function on a low-dimensional unknown manifold. For both problems we introduce a multiscale estimator and fast algorithms for constructing it, give finite sample guarantees for its performance, and discuss its optimality. Finally, we discuss an application of these multiscale decompositions to the fast calculation of optimal transportation plans, introduce a multiscale version of optimal transportation distances, and discuss preliminary applications.

Joint work with Sam Gerber (University of Oregon), Wenjing Liao (Johns Hopkins University), Stefano Vigogna (Johns Hopkins University).



July 11, 15:30 ~ 16:20 - Room B3

Space-parameter-adaptive approximation of affine-parametric elliptic PDEs

Markus Bachmayr

Universität Bonn, Germany   -   bachmayr@ins.uni-bonn.de

We consider the approximation of PDEs with parameter-dependent coefficients by sparse polynomial approximations in the parametric variables, combined with suitable discretizations in the spatial domain. Here we focus on problems with countably many parameters, as they arise when coefficients with uncertainties are modelled as random fields. For the resulting fully discrete approximations of the corresponding solution maps, we obtain convergence rates in terms of the total number of degrees of freedom. In particular, in the case of affine parametrizations, we find that independent adaptive spatial approximation for each term in the polynomial expansion yields improved convergence rates. Moreover, we discuss new operator compression results showing that standard adaptive solvers for finding such approximations can be made to converge at near-optimal rates.

Joint work with Albert Cohen (UPMC Paris VI), Wolfgang Dahmen (RWTH Aachen), Dinh Dung (Vietnam National University), Giovanni Migliorati (UPMC Paris VI) and Christoph Schwab (ETH Zurich).



July 11, 17:00 ~ 17:35 - Room B3

Sparse approximation with respect to the Faber-Schauder system

Tino Ullrich

University of Bonn/University of Osnabrueck, Germany   -   tino.ullrich@hcm.uni-bonn.de

We consider approximations of multivariate functions using $m$ terms from their tensorized Faber-Schauder expansion. The univariate Faber-Schauder system on $[0,1]$ is given by dyadic dilates and translates (in the wavelet sense) of the $L_\infty$-normalized simple hat function with support in $[0,1]$. We obtain a hierarchical basis which is tensorized over all levels (hyperbolic) to get the dictionary $\mathcal{F}$. The worst-case error with respect to a class of functions $\mathbf{F} \hookrightarrow X$ is measured by the usual best $m$-term widths $\sigma_m(\mathbf{F},\mathcal{F})_X$, where the error is measured in $X$. We constructively prove the following sharp asymptotic bound for the class of Besov spaces with small mixed smoothness (i.e. $1/p < r < \min\{1/\theta-1,2\}$) in $L_\infty$

\[\sigma_m(\mathbf{B}^r_{p,\theta},\mathcal{F})_\infty \asymp m^{-r}\,. \]

Note that this asymptotic rate of convergence does not depend on the dimension $d$ (only the constants involved do). In addition, the error is measured in $L_\infty$, and to the best of our knowledge this is the first sharp result involving $L_\infty$ as a target space. We emphasize two more things. First, the selection procedure for the coefficients is a level-wise constructive greedy strategy which only touches a finite prescribed number of coefficients. Second, due to the use of the Faber-Schauder system, the coefficients are finite linear combinations of discrete function values. Hence, this method can be considered a nonlinear adaptive sampling algorithm leading to a pure polynomial rate of convergence for any $d$.
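To make the last point concrete: in the univariate case the Faber-Schauder coefficients are second differences of point values, so the expansion is computable from samples alone. A minimal sketch (ours, with simplified boundary handling):

```python
import numpy as np

def faber_schauder_coeffs(f, J):
    """Coefficients of f on [0,1] w.r.t. hat functions up to level J.
    Each interior coefficient is a second difference of point values."""
    coeffs = {"f(0)": f(0.0), "f(1)-f(0)": f(1.0) - f(0.0)}  # coarse part
    for j in range(J):
        h = 2.0 ** (-j)
        for k in range(2 ** j):
            mid, left, right = (k + 0.5) * h, k * h, (k + 1) * h
            coeffs[(j, k)] = f(mid) - 0.5 * (f(left) + f(right))
    return coeffs

c = faber_schauder_coeffs(np.sin, 6)
for j in range(6):
    print(j, max(abs(c[(j, k)]) for k in range(2 ** j)))  # ~ 4**(-j) decay
```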

Joint work with Glenn Byrenheid (University of Bonn).



July 11, 17:40 ~ 18:15 - Room B3

Principal component analysis for the approximation of high-dimensional functions using tree-based low-rank formats

Anthony Nouy

Ecole Centrale de Nantes, France   -   anthony.nouy@ec-nantes.fr

We present an algorithm for the approximation of high-dimensional functions using tree-based low-rank approximation formats (tree tensor networks). A multivariate function is here considered as an element of a Hilbert tensor space of functions defined on a product set equipped with a probability measure, the function being identified with a multidimensional array when the product set is finite. The algorithm only requires evaluations of functions (or arrays) on a structured set of points (or entries) which is constructed adaptively. The algorithm is a variant of higher-order singular value decomposition which constructs a hierarchy of subspaces associated with the different nodes of a dimension partition tree and a corresponding hierarchy of interpolation operators. Optimal subspaces are estimated using empirical principal component analysis of interpolations of partial random evaluations of the function. The algorithm is able to provide an approximation in any tree-based format with either a prescribed rank or a prescribed relative error, with a number of evaluations of the order of the storage complexity of the approximation format.
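A hedged schematic of the empirical PCA step described above, for a function of two variable groups (our sketch; see [2] below for the actual algorithm): a principal subspace in the first variable is estimated from evaluations at random points of the second.

```python
import numpy as np

def empirical_principal_subspace(fun, grid_x, rng, r, n_samples):
    """Estimate an r-dim principal subspace for variable x of fun(x, y)
    from evaluations at random samples of the complementary variable y."""
    y = rng.uniform(-1.0, 1.0, size=n_samples)
    # columns: partial evaluations x -> fun(x, y_i) on a grid in x
    M = np.array([[fun(x, yi) for yi in y] for x in grid_x])
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :r], s           # subspace basis + singular values

rng = np.random.default_rng(1)
grid = np.linspace(-1.0, 1.0, 50)
U, s = empirical_principal_subspace(
    lambda x, y: 1.0 / (2.0 + x * y), grid, rng, r=3, n_samples=30)
print(s[:5] / s[0])              # fast decay => low tree-based ranks
```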

[1] W. Hackbusch. Tensor spaces and numerical tensor calculus, volume 42 of Springer series in computational mathematics. Springer, Heidelberg, 2012.

[2] A. Nouy. Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats. arXiv preprint, 2017.



July 11, 18:20 ~ 18:55 - Room B3

Optimal Approximation with Sparsely Connected Deep Neural Networks

Gitta Kutyniok

Technische Universität Berlin, Germany   -   kutyniok@math.tu-berlin.de

Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a deep neural network with sparse connectivity. We will derive fundamental lower bounds on the connectivity and the memory requirements of deep neural networks guaranteeing uniform approximation rates for arbitrary function classes, also including functions on low-dimensional immersed manifolds. Additionally, we prove that our lower bounds are achievable for a broad family of function classes, thereby deriving an optimality result. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks providing close-to-optimal approximation rates at minimal connectivity. Moreover, surprisingly, these results show that stochastic gradient descent actually learns approximations that are sparse in the representation systems optimally sparsifying the function class the network is trained on.

Joint work with Helmut Bölcskei (ETH Zürich, Switzerland), Philipp Grohs (Universität Wien, Austria) and Philipp Petersen (Technische Universität Berlin, Germany).



July 12, 14:30 ~ 15:20 - Room B3

Computing a Quantity of Interest from Observational Data

Simon Foucart

Texas A&M University, USA   -   foucart@tamu.edu

Scientific problems often feature observational data received in the form $w_1=l_1(f), \ldots$, $w_m=l_m(f)$ of known linear functionals applied to an unknown function $f$ from some Banach space $\mathcal{X}$, and it is required either to approximate $f$ (the full approximation problem) or to estimate a quantity of interest $Q(f)$. In typical examples, the quantities of interest can be the maximum/minimum of $f$ or some averaged quantity such as the integral of $f$, while the observational data consists of point evaluations. To obtain meaningful results about such problems, it is necessary to possess additional information about $f$, usually as an assumption that $f$ belongs to a certain model class $\mathcal{K}$ contained in $\mathcal{X}$. This is precisely the framework of optimal recovery, which has led to substantial investigations when the model class is a ball of a smoothness space, e.g. when it is a Lipschitz, Sobolev, or Besov class. This presentation is concerned with other model classes described by approximation processes. Its main contributions are: (i) for the estimation of quantities of interest, the production of numerically implementable algorithms which are optimal over these model classes, (ii) for the full approximation problem, the construction of linear algorithms which are optimal or near optimal over these model classes in case of data consisting of point evaluations. Regarding (i), when $Q$ is a linear functional, the existence of linear optimal algorithms was established by Smolyak, but the proof was not numerically constructive. In classical recovery settings, it is shown here that such linear optimal algorithms can be produced by constrained minimization methods, and examples involving the computation of integrals from the given data are examined in greater detail. Regarding (ii), it is shown that linearization of optimal algorithms can be achieved for the full approximation problem, too, in the important situation where the $l_j$ are point evaluations and $\mathcal{X}$ is a space of continuous functions equipped with the uniform norm. It is also revealed how quasi-interpolation theory allows for the construction of linear algorithms which are near optimal.

Joint work with Ron DeVore (Texas A&M University), Guergana Petrova (Texas A&M University) and Przemyslaw Wojtaszczyk (University of Warsaw).



July 12, 15:30 ~ 16:20 - Room B3

Stable Gabor Phase Retrieval and Spectral Clustering

Philipp Grohs

University of Vienna, Austria   -   philipp.grohs@univie.ac.at

We consider the problem of reconstructing a signal $f$ from its spectrogram, i.e., the magnitudes $|V_\varphi f|$ of its Gabor transform $$V_\varphi f (x,y):=\int_{\mathbb{R}}f(t)e^{-\pi (t-x)^2}e^{-2\pi i y t}dt, \quad x,y\in \mathbb{R}.$$ Such problems occur in a wide range of applications, from optical imaging of nanoscale structures to audio processing and classification. While it is well-known that the solution of the above Gabor phase retrieval problem is unique up to natural identifications, the stability of the reconstruction has remained wide open. The present paper discovers a deep and surprising connection between phase retrieval, spectral clustering and spectral geometry. We show that the stability of the Gabor phase reconstruction is bounded by the inverse of the \emph{Cheeger constant} of the flat metric on $\mathbb{R}^2$, conformally multiplied with $|V_\varphi f|$. The Cheeger constant, in turn, plays a prominent role in the field of spectral clustering, and it precisely quantifies the `disconnectedness' of the measurements $V_\varphi f$. It has long been known that a disconnected support of the measurements results in an instability -- our result for the first time provides a converse result in the sense that there are no other sources of instabilities. Due to the fundamental importance of Gabor phase retrieval in coherent diffraction imaging, we also provide a new understanding of the stability properties of these imaging techniques: Contrary to most classical problems in imaging science whose regularization requires the promotion of smoothness or sparsity, the correct regularization of the phase retrieval problem promotes the `connectedness' of the measurements in terms of bounding the Cheeger constant from below. Our work thus, for the first time, opens the door to the development of efficient regularization strategies.
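The spectrogram in the abstract's normalization can be computed by direct quadrature; below is a slow but transparent sketch (ours, purely illustrative), applied to a signal with two well-separated bumps, whose nearly disconnected measurements are exactly the source of instability the result quantifies.

```python
import numpy as np

def gabor_magnitude(f, xs, ys, t):
    """|V_phi f| on a grid, by quadrature of the abstract's formula
    V_phi f(x,y) = int f(t) e^{-pi (t-x)^2} e^{-2 pi i y t} dt."""
    dt = t[1] - t[0]
    ft = f(t)
    S = np.empty((len(xs), len(ys)))
    for i, x in enumerate(xs):
        w = np.exp(-np.pi * (t - x) ** 2) * ft          # windowed signal
        for j, y in enumerate(ys):
            S[i, j] = abs(np.sum(w * np.exp(-2j * np.pi * y * t)) * dt)
    return S

t = np.linspace(-8.0, 8.0, 2048)
xs = ys = np.linspace(-4.0, 4.0, 40)
# two well-separated bumps: the spectrogram support is nearly disconnected
S = gabor_magnitude(lambda s: np.exp(-(s - 3) ** 2) + np.exp(-(s + 3) ** 2),
                    xs, ys, t)
print(S.max(), S.min())
```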

Joint work with Martin Rathmair (University of Vienna).



July 12, 17:00 ~ 17:35 - Room B3

On the entropy numbers of the mixed smoothness function classes

Vladimir Temlyakov

University of South Carolina and Steklov Institute of Mathematics, United States   -   temlyakovv@gmail.com

The behavior of the entropy numbers of classes of multivariate functions with mixed smoothness is studied. This problem has a long history and some fundamental questions in the area are still open. The main goal of this talk is to present a new method of proving upper bounds for the entropy numbers. The method is based on recent developments in nonlinear approximation, in particular greedy approximation, and consists of the following two-step strategy. At the first step we obtain bounds on the best $m$-term approximations with respect to a dictionary. At the second step we use general inequalities relating the entropy numbers to the best $m$-term approximations. For the lower bounds we use the volume estimates method, a well-known powerful method for proving lower bounds for the entropy numbers.



July 12, 17:40 ~ 18:15 - Room B3

Approximation of set-valued functions by adaptation of classical approximation operators to sets

Nira Dyn

School of Mathematical Sciences, Tel-Aviv University, Israel   -   niradyn@post.tau.ac.il

In this talk we consider approximation of univariate set-valued functions, mapping a closed interval to compact sets in a Euclidean space. Since the collection of such sets is not a vector space, the classical approximation operators have to be adapted to this setting. One way is to replace operations between numbers by operations between sets. When the approximation error is measured in the Hausdorff metric, the operations between sets designed by us lead to error bounds expressed in terms of the regularity properties of the approximated set-valued function.

An example of a possible application of the theory to the approximation of a 3D object from its parallel 2D cross-sections concludes the talk.
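Since the errors above are measured in the Hausdorff metric, a minimal numpy sketch of that metric for finite point sets may help fix ideas (our illustration, not the speakers' code):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in R^d,
    given as arrays of shape (m, d) and (n, d)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.1], [1.0, 0.0], [0.5, 0.5]])
print(hausdorff(A, B))   # the larger of the two one-sided deviations
```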

Joint work with Elza Farkhi (Tel-Aviv University, Israel) and Alona Mokhov (Afeka College, Tel-Aviv, Israel).



July 12, 18:20 ~ 18:55 - Room B3

Polynomial approximation of smooth, multivariate functions on irregular domains

Ben Adcock

Simon Fraser University, Canada   -   benadcock@sfu.ca

Smooth, multivariate functions defined on tensor domains can be approximated using simple orthonormal bases formed as tensor products of one-dimensional orthogonal polynomials.  On the other hand, constructing orthogonal polynomials on irregular domains is a difficult and computationally intensive task.  Yet irregular domains arise in many practical problems, including uncertainty quantification, model-order reduction, optimal control and numerical PDEs. In this talk I will introduce a method for approximating smooth, multivariate functions on irregular domains, known as polynomial frame approximation. Importantly, this method corresponds to approximation in a frame, rather than a basis; a fact which leads to several key differences, both theoretical and numerical in nature.  However, this method requires no orthogonalization or parametrization of the boundary, thus making it suitable for very general domains. I will discuss theoretical results for the approximation error, stability and sample complexity of this algorithm, and show its suitability for high-dimensional approximation through independence (or weak dependence) of the guarantees on the ambient dimension $d$. I will also present several numerical results, and highlight some open problems and challenges.
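A minimal sketch of the idea under simplifying assumptions (2-D, tensor Legendre basis of the bounding box $[-1,1]^2$, random samples in the irregular domain, truncated SVD as the regularizer; not the authors' implementation):

```python
import numpy as np
from numpy.polynomial import legendre

def frame_approx(f, pts, deg, tol=1e-8):
    """Least-squares fit on an irregular domain using tensor Legendre
    polynomials of the bounding box (a frame, not a basis), regularized
    by a truncated SVD to cope with the ill-conditioning."""
    V = np.stack([legendre.legval(pts[:, 0], np.eye(deg + 1)[i]) *
                  legendre.legval(pts[:, 1], np.eye(deg + 1)[j])
                  for i in range(deg + 1) for j in range(deg + 1)], axis=1)
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    keep = s > tol * s[0]                       # discard the tiny tail
    c = Vt[keep].T @ ((U[:, keep].T @ f(pts[:, 0], pts[:, 1])) / s[keep])
    return c, V @ c                             # coefficients, fitted values

rng = np.random.default_rng(2)
P = rng.uniform(-1, 1, size=(4000, 2))
P = P[P[:, 0] ** 2 + P[:, 1] ** 2 < 1.0]        # irregular domain: a disk
f = lambda x, y: np.exp(x * y)
c, fit = frame_approx(f, P, deg=10)
print(np.abs(fit - f(P[:, 0], P[:, 1])).max())  # small error on the samples
```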

Joint work with Daan Huybrechs (K.U. Leuven), Juan Manuel Cardenas (Universidad de Concepcion) and Sebastian Scheuermann (Universidad de Concepcion).



 

Posters


Optimal recovery of integral operators and applications

Yuliya Babenko

Kennesaw State University, USA   -   ybabenko@kennesaw.edu

In this talk we present the solution to the problem of recovering a rather arbitrary integral operator from incomplete information given with error. We apply the main result to obtain optimal (asymptotically optimal) methods of recovery and to compute the optimal (asymptotically optimal) error for the solutions to certain integral equations as well as boundary and initial value problems for various PDEs. In particular, to illustrate the method, we present results for various boundary value problems for the wave, heat, and Poisson equations. Nevertheless, the developed method is more general and can be applied to other similar problems.

Joint work with Vladyslav Babenko (Oles Honchar Dnipropetrovsk National University, Ukraine), Natalia Parfinovych (Oles Honchar Dnipropetrovsk National University, Ukraine) and Dmytro Skorokhodov (Oles Honchar Dnipropetrovsk National University, Ukraine).



The Beurling-Selberg Box Minorant Problem

Jacob Carruth

University of Texas at Austin, United States   -   jcarruth@math.utexas.edu

Let $F$ be a function on $\mathbb{R}^d$ which is bandlimited to the unit cube and which minorizes the characteristic function of the unit cube. How large can the integral of $F$ be? Selberg first asked this question to solve a problem in number theory, but it has since found many applications, including signal recovery, crystallography, and sphere packing. We show that there is a dimension $d^*$ such that for all $d \geq d^*$ the answer to this question is zero. Furthermore, we discuss a numerical method which gives an explicit upper bound on $d^*$.

Joint work with Noam Elkies (Harvard University), Felipe Goncalves (University of Alberta) and Michael Kelly (Institute for Defense Analysis, Princeton, NJ).



Sparse polynomial techniques in high dimension for non intrusive treatment of parametric PDEs

Moulay Abdellah Chkifa

Mohammed VI Polytechnic University, Morocco   -   bchkifa@gmail.com

Motivated by the development of non-intrusive methods for parametric PDEs in high dimension, we present an overview of various polynomial schemes for approximation in high dimension. The parametric PDEs that we consider are formulated as \[ {\cal D}(u,y)=0, \] where ${\cal D}$ is a partial differential operator, $u$ is the unknown solution and $y=(y_j)$ is a parameter vector of high dimension $d\gg 1$, where every $y_j$ ranges in $[-1,1]$, \[ y\in U:=[-1,1]^d. \] If we assume well-posedness in a fixed Banach space $V$ for all $y$, the solution map is given by \[ y\in U \mapsto u(y)\in V. \] We are interested in constructing approximations $u_\Lambda$ to $u$ in polynomial spaces of the form \[ {\mathbb V}_\Lambda = V \otimes {\mathbb P}_\Lambda := \bigg\{ \sum_{\nu\in\Lambda} c_\nu~ y_1^{\nu_1}\dots y_d^{\nu_d}: c_\nu \in V \bigg\}, \quad \quad \Lambda \subset {\mathbb N}^d, \] where $\Lambda$ is downward closed, i.e. \[ \mu \leq \nu\in \Lambda \;\; \Rightarrow \;\; \mu \in \Lambda, \] with \[ \mu \leq \nu \quad \equiv \quad \mu_j \leq \nu_j,\;\; j=1,\dots,d. \] To this end, we often use non-intrusive collocation-type polynomial schemes which build the approximations $u_\Lambda$ from queries $u^1=u(y^1),~u^2=u(y^2),\dots\in V$ of the map $u$, at well-chosen points $y^1,y^2,\dots \in U$, obtained through a black-box solver (e.g. a legacy solver). We provide an overview of some of these methods, namely:

Sparse grids.

Sparse polynomial interpolation.

Sparse approximation based on de la Vallée Poussin type operators.

Sparse least squares.

Compressive sensing via $l_1$-minimization.

The computability, cost, robustness and stability of the various schemes are compared. These criteria are strongly tied to the sampling technique for the parameters $y^j$: in other words, to how the locations $y^1,y^2,\dots$ should be placed in the hypercube $U$ so that the criteria for the scheme are met. For instance, cost savings can be achieved using hierarchical sampling, for which the sampling set is progressively enriched together with the polynomial space ${\mathbb V}_\Lambda$ as new basis elements are added (as $\Lambda$ gets enriched with new multi-indices $\nu\in {\mathbb N}^d$). We show how structured hierarchical sampling based on Clenshaw–Curtis rules, or more generally $\Re$-Leja sequences, yields satisfactory results for this purpose.
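As a small concrete aid, the downward-closedness condition defined above is easy to state in code; a hedged sketch for finite index sets (names are ours, for illustration):

```python
def is_downward_closed(Lam):
    """Check that a finite set of multi-indices (tuples in N^d) is
    downward closed: with nu, it contains every mu with mu <= nu.
    It suffices to check single coordinate decrements."""
    S = set(Lam)
    for nu in S:
        for j in range(len(nu)):
            if nu[j] > 0:
                mu = nu[:j] + (nu[j] - 1,) + nu[j + 1:]  # one step down
                if mu not in S:
                    return False
    return True

print(is_downward_closed({(0, 0), (1, 0), (0, 1), (1, 1)}))  # True
print(is_downward_closed({(0, 0), (2, 0)}))   # False: (1, 0) is missing
```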

Joint work with Albert Cohen (Université Pierre et Marie Curie, France), Christoph Schwab (ETH Zurich, Switzerland), Giovanni Migliorati (Université Pierre et Marie Curie, France), Fabio Nobile (École Polytechnique Fédérale de Lausanne, Switzerland), Raul Tempone (King Abdullah University of Science and Technology, Saudi Arabia), Clayton Webster (Oak Ridge National Laboratory, USA) and Hoang Tran (Oak Ridge National Laboratory, USA).



Multi-Scale Decomposition of Transformations

Paul Escande

Johns Hopkins University, United States of America   -   paul.escande@gmail.com

In many applications, transformations between two domains are defined through point-wise mappings. These functions can be costly to store and compute, but also hard to interpret in a geometric fashion. In this work, we propose a way to overcome these difficulties. The main idea is a novel multi-scale decomposition of complex transformations into a cascade of elementary, user-specified, transformations.

This method allows one to: (i) Construct efficient approximations for elements of large spaces of complex transformations using simple understandable blocks, (ii) Use transformations to measure similarities between complex objects, (iii) Deal with invariance under certain transformations, (iv) Perform statistical inference tasks on sets of transformations.

In this poster we will describe the method as well as provide theoretical guarantees on the quality of the multi-scale approximations. Then we will present some numerical experiments that show its computational efficiency.

Joint work with Mauro Maggioni (Johns Hopkins University).



Frames and numerical approximation

Daan Huybrechs

KU Leuven, Belgium   -   daan.huybrechs@cs.kuleuven.be

A frame is a generalisation of a basis which can be redundant (i.e. linearly dependent). This redundancy provides a wealth of flexibility and enables rapid approximation of classes of functions that cannot be well approximated by commonly used bases. Examples include functions with a singularity, or multivariate functions defined on irregular domains. On the other hand, the redundancy also leads to extremely ill-conditioned linear systems. Yet, it can be shown that the computation of best approximations in a frame is numerically stable, in spite of the ill-conditioning. Furthermore, several frames of practical interest give rise to linear systems with a particular singular value structure - a plunge region - that can be exploited to achieve fast approximation algorithms.

In this poster we demonstrate by example how to approximate functions using several different types of frames. The stability of numerical approximations in frames was recently shown in [1]. The fast algorithms relate to the literature on bandlimited extrapolation in sampling theory and signal processing. For example, fast approximation in the so-called Fourier extension frame is achieved via implicit projection onto discrete prolate spheroidal wave sequences [2]. This allows the efficient approximation of functions defined on very irregular sets, including fractals.

[1] B. Adcock and D. Huybrechs, Frames and numerical approximation, 2016 (https://arxiv.org/abs/1612.04464)

[2] R. Matthysen and D. Huybrechs, Fast Algorithms for the computation of Fourier Extensions of arbitrary length, SIAM J. Sci. Comput. 38(2), pp.899-922, 2016.
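To see the plunge region concretely, here is a toy computation (our sketch, not from [1] or [2]): the singular values of a discretized Fourier extension system cluster near a common level and near zero, with a narrow transition in between.

```python
import numpy as np

# Fourier extension frame: exponentials of period 4 restricted to [-1, 1].
N, M = 60, 600                           # 2N+1 frame elements, M samples
x = np.linspace(-1.0, 1.0, M)
n = np.arange(-N, N + 1)
A = np.exp(1j * np.pi * np.outer(x, n) / 2.0) / np.sqrt(M)
s = np.linalg.svd(A, compute_uv=False)
print(np.round(s[:5], 3))    # top values cluster near a common level...
print(np.round(s[-5:], 8))   # ...the tail near zero: the plunge is between
```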

Joint work with Ben Adcock (Simon Fraser University), Vincent Coppé (KU Leuven), Roel Matthysen (KU Leuven) and Marcus Webb (KU Leuven).



Lebesgue constants for convex polyhedra and polynomial interpolation on Lissajous-Chebyshev nodes

Yurii Kolomoitsev

University of Lübeck, Germany   -   kolomoitsev@math.uni-luebeck.de

Let $W$ be a bounded set in $\mathbb{R}^d$. The following integral is called the Lebesgue constant for $W$: $$ \mathcal{L}(W)=\int_{[0,2\pi)^d}\bigg| \sum_{k\in W\cap \mathbb{Z}^d} e^{i(k,x)}\bigg|\,{\rm d}x. $$

In the talk, we will present new upper and lower estimates of the Lebesgue constants for convex polyhedra. We apply these estimates to analyze the absolute condition number of multivariate polynomial interpolation on Lissajous-Chebyshev node points. The proof is based on a relation between the Lebesgue constant of the polynomial interpolation problem and the Lebesgue constant $\mathcal{L}(W)$, where $W$ is a specific convex polyhedron.
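As a quick illustration of the definition (our sketch): in $d=1$ with $W=[-N,N]$ the sum above is the Dirichlet kernel, and $\mathcal{L}(W)$ grows logarithmically in $N$, which is easy to check numerically.

```python
import numpy as np

def lebesgue_constant_1d(N, M=20000):
    """L(W) for W = [-N, N] in d = 1, i.e. the L1 norm over [0, 2*pi)
    of the Dirichlet kernel 1 + 2*sum_{k=1}^{N} cos(kx), via a Riemann sum."""
    x = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
    D = 1.0 + 2.0 * np.cos(np.outer(np.arange(1, N + 1), x)).sum(axis=0)
    return np.abs(D).mean() * 2.0 * np.pi

for N in (4, 16, 64, 256):
    print(N, lebesgue_constant_1d(N))   # logarithmic growth in N
```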

Joint work with P. Dencker (University of Lübeck, Germany), W. Erb (University of Lübeck, Germany), T. Lomako (Institute of Applied Mathematics and Mechanics of NAS of Ukraine).



The Geometrical Description of Feasible Singular Values in the Tensor Train Format

Sebastian Kraemer

IGPM at RWTH Aachen University, Germany   -   kraemer@igpm.rwth-aachen.de

Tensors have grown in importance and are applied to an increasing number of fields. Crucial in this regard are tensor formats, such as the widespread Tensor Train (TT) decomposition, which represent low-rank tensors. This multivariate TT-rank and the accordant $d-1$ tuples of singular values are based on different matricizations of the same $d$-dimensional tensor. While the behavior of these singular values is as essential as in the matrix case $(d=2)$, here the question of the $\textit{feasibility}$ of specific TT-singular values arises: for which prescribed tuples do corresponding tensors exist, and what is the topology of this set of feasible values?

This work is largely based on a connection that we establish to eigenvalues of sums of Hermitian matrices. After extensive work spanning several centuries, that problem, known through the Horn Conjecture, was essentially resolved by Knutson and Tao through the concept of so-called $\textit{honeycombs}$. We transfer and expand their and earlier results on this topic and thereby find that the sets of squared, feasible TT-singular values are geometrically described by polyhedral cones, resolving our problem setting to the largest extent. Besides necessary inequalities, we also present a linear programming algorithm to check feasibility, as well as a simple heuristic, but quite reliable, parallel algorithm to construct tensors with prescribed, feasible singular values.
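For orientation (our sketch, not the poster's code): the $d-1$ tuples in question are the singular values of the successive unfoldings of the tensor, which are easy to compute directly.

```python
import numpy as np

def tt_singular_values(T):
    """The d-1 tuples of TT-singular values of a tensor: singular values
    of the unfoldings reshape(T, n1*...*nk, n_{k+1}*...*nd), k=1..d-1."""
    n = T.shape
    return [np.linalg.svd(T.reshape(int(np.prod(n[:k + 1])), -1),
                          compute_uv=False) for k in range(len(n) - 1)]

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 4, 3, 2))
for k, s in enumerate(tt_singular_values(T), start=1):
    print(k, np.round(s, 3))
# feasibility asks: which prescribed d-1 tuples arise this way from some tensor?
```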



Dictionary measurement selection for state estimation with reduced models

Olga Mula

Paris Dauphine, France   -   mula@ceremade.dauphine.fr

Parametric PDEs of the general form $$ \mathcal{P}(u,a)=0 $$ are commonly used to describe many physical processes, where $\mathcal{P}$ is a differential operator, $a$ is a high-dimensional vector of parameters and $u$ is the unknown solution belonging to some Hilbert space $V$.

A typical scenario in state estimation is the following: for an unknown parameter $a$, one observes $m$ independent linear measurements of $u(a)$ of the form $\ell_i(u)= (w_i,u)$, $i=1,\dots,m$, where $\ell_i \in V'$ and $w_i$ are the Riesz representers, and we write $W_m = \text{span}\{w_1,\dots,w_m\}$. The goal is to recover an approximation $u^*$ of $u$ from the measurements.

Due to the dependence on $a$, the solutions of the PDE lie on a manifold, and the particular PDE structure often allows one to derive good approximations of it by linear spaces $V_n$ of moderate dimension $n$. In this setting, the observed measurements and $V_n$ can be combined to produce an approximation $u^*$ of $u$ up to accuracy $$ \Vert u -u^*\Vert \leq \beta^{-1}(V_n,W_m) \, \text{dist}(u,V_n), $$ where $$ \beta(V_n,W_m) := \inf_{v\in V_n} \frac{\Vert P_{W_m}v\Vert}{\Vert v \Vert} $$ plays the role of a stability constant. For a given $V_n$, one relevant objective is to guarantee that $\beta(V_n,W_m)\geq \gamma >0$ with a number of measurements $m\geq n$ as small as possible. We present results in this direction when the measurement functionals $\ell_i$ belong to a complete dictionary.
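In finite dimensions the stability constant is easy to compute; a minimal sketch (ours, for illustration): with orthonormal bases of $V_n$ and $W_m$ as columns, $\beta(V_n,W_m)$ is the smallest singular value of the cross-Gramian.

```python
import numpy as np

def beta(V, W):
    """Stability constant beta(V_n, W_m) = inf_{v in V_n} |P_{W_m} v| / |v|.
    V, W: orthonormal bases of the subspaces as columns (d x n, d x m).
    It equals the smallest singular value of the cross-Gramian W^T V."""
    return np.linalg.svd(W.T @ V, compute_uv=False).min()

rng = np.random.default_rng(4)
d, n, m = 100, 5, 12
V = np.linalg.qr(rng.standard_normal((d, n)))[0]
W = np.linalg.qr(rng.standard_normal((d, m)))[0]
print(beta(V, W))   # want this bounded away from 0 with m as small as possible
```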

Joint work with Peter Binev (University of South Carolina, USA), Albert Cohen (Pierre et Marie Curie University, France) and James Nichols (Pierre et Marie Curie University, France).



Higher Order Total Variation, Multiscale Generalizations, and Applications to Inverse Problems

Toby Sanders

Arizona State University, United States   -   tlsander@asu.edu

In the realm of signal and image denoising and reconstruction, L1 regularization techniques have generated a great deal of attention with a multitude of variants. In this work, we are interested in higher order total variation (HOTV) approaches, which are motivated by the idea of encouraging low order polynomial behavior in the reconstruction. A key component for their success is that under certain assumptions, the solution of minimum L1 norm is a good approximation to the solution of minimum L0 norm. In this work, we demonstrate that this approximation can result in artifacts that are inconsistent with desired sparsity promoting L0 properties, resulting in subpar results in some instances. Therefore we have developed a multiscale higher order total variation (MHOTV) approach, which we show is closely related to the use of multiscale Daubechies wavelets. In the development of MHOTV, we confront a number of computational issues, and show how they can be circumvented in a mathematically elegant way, via operator decomposition and alternatively converting the problem into Fourier space. The relationship with wavelets, which we believe has generally gone unrecognized, is shown to hold in several numerical results, although subtle improvements in the results can be seen due to the properties of MHOTV. The results are shown to be useful in a number of computationally challenging practical applications, including image inpainting, synthetic aperture radar, and 3D tomography.
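One common concrete form of the regularizer (a hedged sketch, not necessarily the authors' exact operator): order-$k$ HOTV penalizes the $\ell_1$ norm of $k$-th finite differences, so signals that are piecewise polynomial of degree $k-1$ are sparse under it.

```python
import numpy as np

def hotv_seminorm(u, k):
    """Order-k higher order total variation of a 1-D signal:
    the l1 norm of its k-th finite differences (k = 1 is classical TV)."""
    return np.abs(np.diff(u, n=k)).sum()

x = np.linspace(0.0, 1.0, 200)
u = np.piecewise(x, [x < 0.5, x >= 0.5], [lambda t: t, lambda t: 1.0 - t])
for k in (1, 2, 3):
    print(k, hotv_seminorm(u, k))
# the hat signal is piecewise linear: its order-2 differences are sparse
# (one spike at the kink), which is what HOTV with k = 2 promotes
```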



Subdivision and spline spaces

Tatyana Sorokina

Towson University, USA   -   tsorokina@towson.edu

A standard construction in approximation theory is mesh refinement. For a simplicial or polyhedral mesh $\Delta \subseteq \mathbb{R}^k$, we study the subdivision $\Delta'$ obtained by subdividing a maximal cell of $\Delta$. We give sufficient conditions for the module of splines on $\Delta'$ to split as the direct sum of splines on $\Delta$ and splines on the subdivided cell. As a consequence, we obtain dimension formulas and explicit bases for several commonly used subdivisions and their multivariate generalizations.

Joint work with Hal Schenck (University of Illinois Urbana-Champaign, USA).



