Conference abstracts

Session A1 - Approximation Theory

July 12, 14:30 ~ 15:20 - Room B3

Computing a Quantity of Interest from Observational Data

Simon Foucart

Texas A&M University, USA   -   foucart@tamu.edu

Scientific problems often feature observational data received in the form $w_1 = l_1(f), \ldots, w_m = l_m(f)$ of known linear functionals applied to an unknown function $f$ from some Banach space $\mathcal{X}$, and it is required either to approximate $f$ (the full approximation problem) or to estimate a quantity of interest $Q(f)$. In typical examples, the quantity of interest can be the maximum/minimum of $f$ or some averaged quantity such as the integral of $f$, while the observational data consists of point evaluations. To obtain meaningful results about such problems, one needs additional information about $f$, usually in the form of an assumption that $f$ belongs to a certain model class $\mathcal{K}$ contained in $\mathcal{X}$. This is precisely the framework of optimal recovery, which has generated substantial investigations when the model class is a ball of a smoothness space, e.g. a Lipschitz, Sobolev, or Besov class. This presentation is concerned with other model classes described by approximation processes. Its main contributions are: (i) for the estimation of quantities of interest, the production of numerically implementable algorithms which are optimal over these model classes; (ii) for the full approximation problem, the construction of linear algorithms which are optimal or near optimal over these model classes in the case of data consisting of point evaluations. Regarding (i), when $Q$ is a linear functional, the existence of optimal linear algorithms was established by Smolyak, but the proof was not numerically constructive. In classical recovery settings, it is shown here that such optimal linear algorithms can be produced by constrained minimization methods, and examples involving the computation of integrals from the given data are examined in greater detail. Regarding (ii), it is shown that linearization of optimal algorithms can also be achieved for the full approximation problem, in the important situation where the $l_j$ are point evaluations and $\mathcal{X}$ is a space of continuous functions equipped with the uniform norm. It is also revealed how quasi-interpolation theory allows for the construction of near-optimal linear algorithms.
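
To fix ideas, the worst-case criterion underlying these statements can be sketched as follows; this is the standard optimal-recovery formulation, not necessarily the exact setting adopted in the talk. An algorithm is a map $A: \mathbb{R}^m \to \mathbb{R}$ taking the data $w = (l_1(f), \ldots, l_m(f))$ to an estimate of $Q(f)$, and its performance over the model class is measured by
\[
E_{\mathcal{K}}(A) \;=\; \sup_{f \in \mathcal{K}} \big| Q(f) - A\big(l_1(f), \ldots, l_m(f)\big) \big|.
\]
An algorithm is optimal when it minimizes $E_{\mathcal{K}}(A)$ over all maps $A$. In the classical statement of Smolyak's result, if $Q$ is a linear functional and $\mathcal{K}$ is convex and centrally symmetric, this minimum is already attained by a linear map $A(w) = \sum_{j=1}^m a_j w_j$, so that the search for an optimal algorithm reduces to an optimization over the coefficients $a_1, \ldots, a_m$, which is the constrained minimization alluded to above.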

Joint work with Ron DeVore (Texas A&M University), Guergana Petrova (Texas A&M University) and Przemyslaw Wojtaszczyk (University of Warsaw).
