### Workshop C5 - Information-Based Complexity

**Organizers:** Tino Ullrich (Universität Bonn, Germany) - Frances Kuo (University of New South Wales, Australia) - Erich Novak (Universität Jena, Germany)

## Talks

July 17, 14:30 ~ 15:20 - Room B7

## Optimal algorithms for numerical integration: a survey

### Mario Ullrich

### Johannes Kepler University Linz, Austria - mario.ullrich@jku.at

I will outline some recent (and several classical) results on the complexity of numerical integration using cubature rules with specific point sets. The point sets under consideration will be (admissible) lattices and digital nets, as they are the only presently known constructions that lead to optimal algorithms for the spaces considered here.

The aim of the talk is to present the state of the art in this area of research, including results in the deterministic and randomized setting, results on tractability, and numerical experiments. Additionally, we provide pointers to other relevant research areas and related open problems.

July 17, 15:30 ~ 15:55 - Room B7

## Optimality and Lower Bounds in Approximation Theory

### Jan Vybiral

### Charles University Prague, Czech Republic - vybiral@karlin.mff.cuni.cz

We discuss optimality of approximation algorithms and lower bounds on their complexity using Carl's inequality, which states that for any two Banach spaces $X$ and $Y$ and any $\alpha>0$ there exists a constant $\gamma_\alpha>0$ such that \[ \sup_{1\le k\le n}k^{\alpha}e_k(T)\le \gamma_\alpha \sup_{1\le k\le n} k^\alpha c_k(T) \] holds for all linear and bounded operators $T:X\to Y$. Here $e_k(T)$ is the $k$-th entropy number of $T$ and $c_k(T)$ is the $k$-th Gelfand number of $T$. This inequality has been used to obtain lower bounds on Gelfand numbers, which in turn are a useful tool in the study of optimality of algorithms.
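
In hedged terms (an illustrative contrapositive, not a result specific to the talk), the route from Carl's inequality to such lower bounds can be sketched as follows: \[ c_k(T)\le C\,k^{-\alpha}\ \text{for all } 1\le k\le n \quad\Longrightarrow\quad \sup_{1\le k\le n}k^{\alpha}e_k(T)\le \gamma_\alpha C, \] so a known lower bound on the entropy numbers, $\sup_{1\le k\le n}k^{\alpha}e_k(T)>\gamma_\alpha C$, forces $c_k(T)>C\,k^{-\alpha}$ for some $k\le n$.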

We extend Carl's inequality to quasi-Banach spaces, which allows us to carry this approach over to the area of sparse recovery, where the spaces $\ell_p^N$ play an important role also for $p<1$. Further, we give estimates of entropy numbers of embeddings of Schatten classes of matrices. We close with an application of these results to the optimality of algorithms for low-rank matrix recovery and matrix completion.

Joint work with A. Hinrichs (U Linz, Austria), A. Kolleck (TU Berlin, Germany) and J. Prochno (U Hull, UK).

July 17, 16:00 ~ 16:25 - Room B7

## General Cost Bounds for Randomised Smolyak Algorithm

### Marcin Wnuk

### Kiel University, Germany - wnuk@math.uni-kiel.de

We consider randomized sparse grid (Smolyak) algorithms $A(q,d)$ on $d$-dimensional tensor product spaces $F_d = \bigotimes_{j = 1}^{d} F^{(j)}$, where $F^{(j)}, j = 1, \ldots, d,$ are separable Hilbert spaces of functions. We measure the quality of an algorithm meant to approximate a given linear tensor product operator $S_d:F_d \rightarrow G_d$ (where $G_d = \bigotimes_{j = 1}^{d} G^{(j)}$ and $G^{(j)}, j = 1, \ldots, d,$ are separable Hilbert spaces) via the randomized error $$e^{ran}(A(q,d)) = \Big( \sup\limits_{\substack{f \in F_d, \\ \lVert f \rVert_{F_d} \leq 1}}\mathbb{E}\lVert (S_d-A(q,d))f \rVert^2_{G_d}\Big)^{\frac{1}{2}} $$ and the worst case error $$e^{wce}(A(q,d)) = \Big( \mathbb{E}\sup\limits_{\substack{f \in F_d, \\ \lVert f \rVert_{F_d} \leq 1}}\lVert (S_d-A(q,d))f \rVert^2_{G_d}\Big)^{\frac{1}{2}}. $$ We prove upper bounds on the approximation error of $A(q,d)$ with explicit dependence on the number of variables and the number $N$ of pieces of information used.
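
To make the construction concrete, the following is a minimal deterministic sketch of a Smolyak quadrature in $d=2$ built from nested trapezoidal rules via telescoping differences; the 1D rules, the level $L$ and the test integrand are illustrative choices of ours, not the randomized algorithms analysed in the talk.

```python
import math

# Deterministic sketch of the Smolyak construction for d = 2, built from
# nested 1D trapezoidal rules via the telescoping differences
# Delta_i = Q_i - Q_{i-1}.
def trapezoid_rule(level):
    """Composite trapezoid rule on [0, 1] with 2**level + 1 nodes."""
    n = 2**level + 1
    h = 1.0 / (n - 1)
    nodes = [i * h for i in range(n)]
    weights = [h] * n
    weights[0] = weights[-1] = h / 2
    return list(zip(nodes, weights))

def difference_rule(level):
    """Signed rule Delta_level = Q_level - Q_(level-1), with Delta_1 = Q_1."""
    rule = trapezoid_rule(level)
    if level > 1:
        rule += [(t, -w) for t, w in trapezoid_rule(level - 1)]
    return rule

def smolyak_2d(f, L):
    """A(L,2)f = sum over i + j <= L + 1 of (Delta_i x Delta_j) f."""
    total = 0.0
    for i in range(1, L + 1):
        for j in range(1, L + 2 - i):
            for t1, w1 in difference_rule(i):
                for t2, w2 in difference_rule(j):
                    total += w1 * w2 * f(t1, t2)
    return total

# Integrate exp(x + y) over the unit square; the exact value is (e - 1)**2.
approx = smolyak_2d(lambda u, v: math.exp(u + v), 6)
```

Only the downward-closed index set $\{i+j \le L+1\}$ is used, so far fewer function values are needed than for the full tensor grid of the finest level.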

As an application we rigorously prove a claim from [1] that the results obtained there hold also for randomized algorithms. Moreover, our findings substantially generalize the main results of that paper.

Under additional conditions on $F_d$ our upper bounds can be improved to some extent. We demonstrate this for a multivariate integration problem on a family of Haar wavelet spaces, on which we perform a rigorous comparison of pure QMC quadratures based on scrambled $(t,m,s)$-nets with the Smolyak algorithm.

References

1. L. Plaskota, G. W. Wasilkowski, Tractability of infinite-dimensional integration in the worst case and randomized settings, Journal of Complexity 27 (2011), 505-518.

Joint work with Michael Gnewuch (Kiel University, Germany).

July 17, 17:00 ~ 17:25 - Room B7

## The Marcinkiewicz-type discretization theorems for the hyperbolic cross polynomials

### Vladimir Temlyakov

### University of South Carolina and Steklov Institute of Mathematics, USA - temlyakovv@gmail.com

The talk is devoted to the discretization of integral norms of functions from a given finite-dimensional subspace -- the hyperbolic cross polynomials. This problem is important in applications, but there has been no systematic study of it. We present a new technique that works well for the discretization of the integral norm. It combines a probabilistic technique based on chaining with results on entropy numbers in the uniform norm.

July 17, 17:30 ~ 17:55 - Room B7

## Truncation dimension for linear problems on multivariate function spaces

### Peter Kritzer

### RICAM, Austrian Academy of Sciences, Austria - peter.kritzer@oeaw.ac.at

We discuss linear problems on weighted anchored and ANOVA spaces of high-dimensional functions. The main questions addressed are the following. When is it possible to approximate a continuous linear operator applied to multivariate functions by applying it to functions in which all but the first $k$ variables are set to zero, so that the corresponding error is small? What is the truncation dimension, i.e., the smallest number $k=k(\varepsilon)$ such that the corresponding error is bounded by a given error demand $\varepsilon$? As it turns out, $k(\varepsilon)$ can be very small for sufficiently fast decaying weights.
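
A hedged toy computation of a truncation dimension for the integration of a simple product-weighted integrand; the weights $\gamma_j = 2^{-j}$, the anchor at zero and the integrand are illustrative choices of ours, not the weighted anchored/ANOVA spaces of the talk.

```python
# Toy illustration of the truncation dimension: integrate the product
# function f(x) = prod_j (1 + gamma_j * x_j) over [0,1]^d and compare with
# the integral of its truncation, in which all but the first k variables
# are anchored at 0. Weights and integrand are illustrative choices.
d = 50
gamma = [2.0**-j for j in range(1, d + 1)]

exact = 1.0
for g in gamma:                      # the integral factorizes coordinatewise
    exact *= 1.0 + g / 2.0

def truncated_integral(k):
    """Integral of f with x_{k+1} = ... = x_d set to zero (the anchor)."""
    val = 1.0
    for g in gamma[:k]:
        val *= 1.0 + g / 2.0
    return val

# Truncation dimension k(eps): smallest k whose truncation error is <= eps.
eps = 1e-6
k_eps = next(k for k in range(d + 1) if abs(exact - truncated_integral(k)) <= eps)
```

For these geometrically decaying weights the truncation dimension stays far below $d$, which is the qualitative phenomenon described in the abstract.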

Joint work with Aicke Hinrichs (JKU Linz, Austria), Friedrich Pillichshammer (JKU Linz, Austria) and Greg W. Wasilkowski (University of Kentucky, USA).

July 17, 18:00 ~ 18:25 - Room B7

## Boundedness and approximate recovery of pseudo-differential operators on the $m$-dimensional torus

### Daurenbek Bazarkhanov

### Institute of Mathematics & Math Modeling, Kazakhstan - dauren.mirza@gmail.com

In the talk we will discuss several recent results on the $L_p$-$L_q$ boundedness of pseudo-differential operators on the $m$-dimensional torus with symbols from Hörmander's classes or with rough symbols. Furthermore, we will consider the problem of approximate recovery of those pseudo-differential operators on some function classes.

July 17, 18:30 ~ 18:55 - Room B7

## Multivariate decomposition Galerkin finite element method for elliptic PDEs with random diffusion coefficients

### Dong Nguyen

### KU Leuven, Belgium - dong.nguyen@cs.kuleuven.be

We introduce a *multivariate decomposition Galerkin finite element method* for solving elliptic PDEs with random diffusion coefficients. The proposed method combines the multivariate decomposition method for computing infinite-dimensional integrals with the Galerkin finite element method. Based on a priori error bounds, the parameters of the method are chosen so as to achieve a prescribed accuracy with minimal computational work.

Joint work with Dirk Nuyens (KU Leuven, Belgium).

July 18, 14:30 ~ 14:55 - Room B7

## On the generation of random fields

### Ian Sloan

### University of New South Wales, Sydney, Australia - i.sloan@unsw.edu.au

The generation of Gaussian random fields is a challenging problem in computational mathematics, especially when the correlation length is short and the field is rough. The traditional approach is to use a truncated Karhunen-Loève (KL) expansion, but because of the potentially very slow convergence of the KL expansion, the generation of even a single realisation of the field may then be effectively beyond reach (especially for 3-dimensional domains) if the aim is an expected $L_2$ error of, say, $5\%$. In this talk a completely different approach is used, in which the field is initially generated on a regular grid on a rectangle that contains the physical domain, and then possibly interpolated to obtain the field at other points. In that case there is no need for any truncation; instead, the main problem becomes the factorisation of a large dense matrix. For this we use circulant embedding and FFT ideas.
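
A hedged one-dimensional sketch of the grid-based approach (circulant embedding plus FFT); the exponential covariance, the grid size and the correlation length are illustrative choices of ours, and the talk itself concerns grids on 2D and 3D rectangles.

```python
import numpy as np

# Circulant embedding for sampling a stationary Gaussian field on a regular
# 1D grid: embed the Toeplitz covariance matrix in a circulant one, whose
# eigenvalues are the FFT of its first row, then sample with one more FFT.
m = 64                                    # grid points on [0, 1]
h = 1.0 / (m - 1)
ell = 0.1                                 # correlation length
cov = lambda r: np.exp(-np.abs(r) / ell)  # exponential covariance

row = cov(h * np.arange(m))               # first row of the covariance matrix
circ = np.concatenate([row, row[-2:0:-1]])  # minimal circulant embedding
n = len(circ)

eigs = np.fft.fft(circ).real              # circulant eigenvalues
assert eigs.min() > -1e-10                # embedding is nonnegative definite here

# Complex white noise scaled by sqrt(eigenvalues / n); the real part of the
# FFT is one exact realisation (no truncation error, unlike a KL expansion).
rng = np.random.default_rng(0)
noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
sample = np.fft.fft(np.sqrt(np.maximum(eigs, 0) / n) * noise)[:m].real
```

The top-left $m \times m$ block of the circulant matrix reproduces the target covariance exactly, which is the sense in which no truncation is needed.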

We note that, as far as we know, IBC ideas have not yet been applied to the random field generation problem, for example to determine the standard information cost of obtaining a given expected $L_2$ error. It might be interesting to do so, with the reservation that in the grid-based approach described above the main computational issues concern not the information cost but rather the factorisation of large matrices.

Joint work with Ivan Graham (University of Bath), Frances Kuo (University of New South Wales), Dirk Nuyens (KU Leuven) and Robert Scheichl (University of Bath).

July 18, 15:00 ~ 15:25 - Room B7

## On arbitrarily slow convergence rates for strong numerical approximations of Cox-Ingersoll-Ross processes and squared Bessel processes

### Mario Hefter

### TU Kaiserslautern, Germany - hefter@mathematik.uni-kl.de

Cox-Ingersoll-Ross (CIR) processes are extensively used in state-of-the-art models for the approximate pricing of financial derivatives. In particular, CIR processes are employed day after day to model instantaneous variances (squared volatilities) of foreign exchange rates and stock prices in Heston-type models, and they are also intensively used to model short-rate interest rates. The prices of financial derivatives in the above-mentioned models are very often computed approximately by means of explicit or implicit Euler- or Milstein-type discretization methods based on equidistant evaluations of the driving noise processes. In this work we study the strong convergence speeds of all such discretization methods. More specifically, our main result reveals that each such discretization method achieves at most a strong convergence order of $\delta/2$, where $0<\delta<2$ is the dimension of the squared Bessel process associated with the considered CIR process. In particular, discretization methods currently employed in the financial industry may converge to the solution of the considered CIR process with arbitrarily slow strong convergence rates, and are therefore not capable of solving CIR processes in the strong sense in a reasonable computational time. This lays open the need for more sophisticated approximation methods that are capable of doing so, which consequently cannot belong to the class of algorithms using equidistant evaluations of the driving noise processes.
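
As a hedged illustration of the class of schemes in question, here is a full-truncation Euler discretization of a CIR process based on equidistant evaluations of the driving Brownian motion. The scheme variant, parameters and path count are illustrative choices of ours; note that these parameters satisfy the Feller condition ($\delta = 4\kappa\theta/\sigma^2 \ge 2$), unlike the regime $0<\delta<2$ of the main result.

```python
import numpy as np

# Full-truncation Euler scheme for the CIR process
#   dX = kappa*(theta - X) dt + sigma*sqrt(X) dW,
# i.e. a scheme based on equidistant evaluations of the driving Brownian
# motion W. All parameters below are illustrative choices.
kappa, theta, sigma, x0, T = 2.0, 1.0, 0.5, 1.0, 1.0
n_steps, n_paths = 200, 5000
dt = T / n_steps

rng = np.random.default_rng(42)
x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.standard_normal(n_paths) * np.sqrt(dt)
    xp = np.maximum(x, 0.0)                       # full truncation
    x = x + kappa * (theta - xp) * dt + sigma * np.sqrt(xp) * dw

# Sanity check: the exact mean is E[X_T] = theta + (x0 - theta)*exp(-kappa*T).
exact_mean = theta + (x0 - theta) * np.exp(-kappa * T)
```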

Joint work with Arnulf Jentzen (Zurich, Switzerland).

July 18, 15:30 ~ 16:20 - Room B7

## On sub-polynomial lower error bounds for strong approximation of SDEs

### Larisa Yaroslavtseva

### University of Passau, Germany - larisa.yaroslavtseva@uni-passau.de

We consider the problem of strong approximation of the solution of a stochastic differential equation (SDE) at the final time, based on finitely many evaluations of the driving Brownian motion $W$. While the majority of results for this problem deal with equations that have globally Lipschitz continuous coefficients, such assumptions are typically not met in real-world applications. In recent years a number of positive results for this problem have been established under substantially weaker assumptions on the coefficients, such as global monotonicity conditions: new types of algorithms have been constructed that are easy to implement and still achieve a polynomial rate of convergence.

In our talk we present negative results for this problem. First we show that there exist SDEs with bounded smooth coefficients whose solutions cannot be approximated by any kind of adaptive method with a polynomial rate of convergence. Even worse, we show that for any sequence $(a_n)_{n \in \mathbb N}\subset (0, \infty)$, which may converge to zero arbitrarily slowly, there exists an SDE with bounded smooth coefficients such that no approximation method based on $n$ adaptively chosen evaluations of $W$ can, on average, achieve a smaller absolute mean error than the given number $a_n$.

While the diffusion coefficients of these pathological SDEs are globally Lipschitz continuous, the first order partial derivatives of the drift coefficients are, essentially, of exponential growth. In the second part of the talk we show that sub-polynomial rates of convergence may happen even when the first order partial derivatives of the coefficients have at most polynomial growth, which is one of the typical assumptions in the literature on numerical approximation of SDEs with globally monotone coefficients.

Joint work with Arnulf Jentzen (ETH Zürich, Switzerland) and Thomas Müller-Gronbach (University of Passau, Germany).

July 18, 17:00 ~ 17:25 - Room B7

## Approximation of anisotropic periodic Sobolev functions

### Thomas Kühn

### Universität Leipzig, Germany - kuehn@math.uni-leipzig.de

I will report about recent progress on approximation of periodic functions belonging to anisotropic Sobolev spaces on the $d$-torus, for any dimension $d$. The error will be measured in the $L_2$-norm and expressed by approximation numbers of embedding operators.

In particular, I will present new asymptotic and preasymptotic estimates, with special emphasis on the $d$-dependence of the hidden constants. Then I will give an interpretation of these results in the language of IBC, in terms of tractability.

Joint work with Winfried Sickel (Friedrich-Schiller-Universität Jena, Germany) and Tino Ullrich (Hausdorff Center for Mathematics, Bonn, Germany).

July 18, 17:30 ~ 17:55 - Room B7

## An Upper Bound of the Minimal Dispersion via Delta Covers

### Daniel Rudolf

### Georg-August-Universität Göttingen, Germany - daniel.rudolf@uni-goettingen.de

For a point set of $n$ elements in the $d$-dimensional unit cube and a class of test sets, we are interested in the largest volume of a test set which does not contain any point of the set. For all natural numbers $n$, $d$ and under the assumption of a $\delta$-cover with cardinality $\vert \Gamma_\delta \vert$, we prove that there is a point set such that the largest volume of a test set without any point is bounded by $\frac{\log \vert \Gamma_\delta \vert}{n} + \delta$. For axis-parallel boxes in the unit cube this leads to a volume of at most $\frac{4d}{n}\log(\frac{9n}{d})$, and on the torus to $\frac{4d}{n}\log (2n)$.
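
As a hedged toy experiment (our own Monte Carlo sketch, not part of the talk), one can probe the largest empty axis-parallel box of a concrete point set and compare it with the bound $\frac{4d}{n}\log(\frac{9n}{d})$; the random point set and the number of trial boxes are illustrative choices, and the probe only yields a lower estimate of the true dispersion.

```python
import math
import random

# Monte Carlo probe of the largest empty axis-parallel box for a point set
# in [0,1]^2, compared with the bound (4d/n) * log(9n/d). Random points are
# generally worse than the (near-)optimal point sets whose existence the
# bound asserts.
random.seed(0)
n, d = 64, 2
points = [(random.random(), random.random()) for _ in range(n)]

best = 0.0
for _ in range(20000):
    x0, x1 = sorted(random.random() for _ in range(2))
    y0, y1 = sorted(random.random() for _ in range(2))
    if not any(x0 < px < x1 and y0 < py < y1 for px, py in points):
        best = max(best, (x1 - x0) * (y1 - y0))

bound = (4 * d / n) * math.log(9 * n / d)
```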

July 18, 18:00 ~ 18:25 - Room B7

## The Complexity of High and Infinite Dimensional Integration

### Aicke Hinrichs

### Johannes Kepler University Linz, Austria - aicke.hinrichs@jku.at

We present recent results on the complexity of integration. We focus on high and infinite dimensional settings. In particular, we explain how embedding theorems between certain scales of function spaces can be used to transfer complexity results between different settings. This approach is equally useful for approximation problems.

Joint work with Michael Gnewuch, Mario Hefter, Klaus Ritter, Greg Wasilkowski.

July 18, 18:30 ~ 18:55 - Room B7

## Frolov's cubature formula in low dimensions

### Christopher Kacwin

### University of Bonn, Germany - kacwin@ins.uni-bonn.de

Frolov's cubature formula has recently returned to the focus of research, because it achieves optimal convergence rates for many function classes, including Sobolev and Besov spaces of dominating mixed smoothness. It can be stated as follows: for a given integration domain $\Omega$, choose an admissible lattice $\Gamma$ and set \[\Phi_\Gamma(f) = \det\Gamma\sum_{x\in\Gamma\cap\Omega}f(x)\approx \int_\Omega f(x)dx \,.\] This leaves a certain degree of freedom in the choice of the lattice $\Gamma$ and raises the question of which lattices give optimal numerical results. In this talk, we present a construction method for admissible lattices and provide, for dimensions $d=2,\ldots,10$, a list of what we believe are nearly optimal choices. We back up our findings with numerical results, comparing Frolov's cubature formula for different admissible lattices as well as with other numerical integration methods.
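
As a hedged illustration, here is a toy implementation of $\Phi_\Gamma$ in dimension $d=2$ for the classical admissible lattice built from the golden ratio and its algebraic conjugate; the lattice, the scaling $N$ and the test integrand are our own illustrative choices, not the nearly optimal lattices from the talk.

```python
import math

# Toy instance of Phi_Gamma(f) = det(Gamma) * sum over Gamma ∩ Omega of f(x)
# in d = 2, Omega = [0,1)^2, for the lattice spanned by (1, 1) and
# (phi, phi_conjugate), shrunk by N. By the norm form of Z[phi], nonzero
# lattice points satisfy |x1 * x2| >= 1/N^2, i.e. the lattice is admissible.
a = (1 + math.sqrt(5)) / 2        # golden ratio
b = (1 - math.sqrt(5)) / 2        # its algebraic conjugate
N = 64                            # shrinking factor
det = math.sqrt(5) / N**2         # volume of a fundamental cell

def frolov_2d(f):
    total, count = 0.0, 0
    for k1 in range(-3 * N, 3 * N):
        for k2 in range(-3 * N, 3 * N):
            x1 = (k1 + a * k2) / N
            x2 = (k1 + b * k2) / N
            if 0.0 <= x1 < 1.0 and 0.0 <= x2 < 1.0:
                total += f(x1, x2)
                count += 1
    return det * total, count

# Integrate f(x, y) = x*y over the unit square; the exact value is 1/4.
approx, n_points = frolov_2d(lambda x, y: x * y)
```

The sum of the weights, `n_points * det`, approximates the volume of the cube, which is a quick sanity check on the enumeration.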

Joint work with Jens Oettershagen (University of Bonn, Germany), Mario Ullrich (Johannes-Kepler University Linz, Austria) and Tino Ullrich (University of Bonn, Germany).

July 19, 14:30 ~ 14:55 - Room B7

## On the complexity of computing the $L_q$ norm

### Stefan Heinrich

### University of Kaiserslautern, Germany - heinrich@informatik.uni-kl.de

We study the complexity of computing \[ S(f) =\Bigg(\int_{Q} f(x)^q dx \Bigg)^{1/q}, \] that is, the $L_q$ norm of a function. We assume that $f$ belongs to the unit ball of the Sobolev space $W_p^r(Q)$, where $r\in\mathbb{N}$, $1\le p\le \infty$, $1\le q< \infty$, $r/d>1/p-1/q$, and $Q=[0,1]^d$. Information is standard, that is, it consists of function values.

A general result of G. W. Wasilkowski [Some nonlinear problems are as easy as the approximation problem, Comp. & Maths. with Appls. 10 (1984), 351-363] states that in the deterministic setting the $n$-th minimal errors $e_n^{\rm det}(S)$ are of the same order as $e_n^{\rm det}(J)$, where $J$ is the embedding $J:W_p^r(Q)\to L_q(Q)$, thus we can derive the order of $e_n^{\rm det}(S)$ directly from known results on approximation. In the randomized setting such a general result no longer holds.

We present and analyze a randomized algorithm for computing $S(f)$ and provide lower bounds, which allow us to determine the order of the randomized $n$-th minimal errors $e_n^{\rm ran} (S)$. We also provide comparisons to $e_n^{\rm det}(S)$ as well as to $e_n^{\rm ran}(J)$ and discuss some extensions.
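
As a hedged baseline, here is a plain Monte Carlo estimator of $S(f)$ from standard information, i.e., values of $f$ at iid uniform points; this is not the algorithm of the talk, and $f$, $q$, $d$ and the sample size are illustrative choices of ours.

```python
import random

# Plain Monte Carlo estimator of S(f) = (int_Q f(x)^q dx)^(1/q) using only
# standard information: values of f at iid uniform points of Q = [0,1]^d.
random.seed(1)
q, d, n = 2, 3, 50_000
f = lambda x: sum(x)                    # f(x) = x_1 + x_2 + x_3

mean_fq = sum(f([random.random() for _ in range(d)]) ** q for _ in range(n)) / n
s_hat = mean_fq ** (1 / q)              # estimate of the L_q norm
# For this f and q the exact value is (int (x1+x2+x3)^2 dx)^(1/2) = sqrt(5/2).
```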

July 19, 15:00 ~ 15:25 - Room B7

## Recovery from binary measurements using $\ell_1$-SVMs

### Anton Kolleck

### Technische Universität Berlin, Germany - 004930 - 31427379

In this talk we discuss the recovery of sparse signals $x\in\mathbb{R}^d$ from quantized linear measurements in the most extreme case, i.e., we want to recover $x$ from $y=\mathrm{sign}(Ax)$.

We follow the idea of support vector machines and show that we only need about $s \log(d)$ measurements to recover an $s$-sparse signal $x$ from $y$, even in the case when the measurement process is disturbed by random bitflips.
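
A hedged sketch of the measurement model $y=\mathrm{sign}(Ax)$; for recovery it uses a simple normalised back-projection estimate $A^T y$ instead of the $\ell_1$-SVM from the talk, and the dimensions and the constant in $m \approx s\log(d)$ are illustrative choices of ours.

```python
import numpy as np

# One-bit measurement model y = sign(A x) with a Gaussian matrix A, and a
# simple back-projection recovery of the direction of the sparse signal.
rng = np.random.default_rng(0)
d, s = 200, 5
m = int(40 * s * np.log(d))             # about 1060 one-bit measurements

x = np.zeros(d)
x[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)                  # one-bit data only determines the direction

A = rng.standard_normal((m, d))
y = np.sign(A @ x)                      # quantized measurements

x_hat = A.T @ y
x_hat /= np.linalg.norm(x_hat)
cosine = float(x_hat @ x)               # alignment with the true direction
```

Since $y$ is invariant under positive scaling of $x$, only the direction of $x$ can be recovered, which is why the alignment (cosine) is the natural error measure here.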

Joint work with Jan Vybiral (Charles University, Czech Republic).

July 19, 15:30 ~ 15:55 - Room B7

## On multivariate integration with respect to the Gaussian measure

### Christian Irrgeher

### RICAM, Austrian Academy of Sciences, Austria - christian.irrgeher@ricam.oeaw.ac.at

We consider the Gaussian integration problem, i.e., we want to approximate multivariate integrals with respect to the Gaussian measure in an efficient way. To this end, we study the problem in terms of the convergence rate and the dependence on the dimension. The efficient approximation of such integrals is of great interest in many applications where Gaussian models are used (e.g. finance, uncertainty quantification, ...).

In this talk we give an overview of the state of research and present current results regarding the Gaussian integration problem.

Joint work with Josef Dick (UNSW Sydney, Australia), Gunther Leobacher (KFU Graz, Austria) and Friedrich Pillichshammer (JKU Linz, Austria).

July 19, 16:00 ~ 16:25 - Room B7

## Monte Carlo Methods for the $L_{\infty}$-Approximation on Sobolev Spaces with Bounded Mixed Derivatives

### Van Kien Nguyen

### Friedrich-Schiller-University Jena, Germany - kien.nguyen@uni-jena.de

In this talk we give tight lower and upper bounds for the order of convergence of linear Monte Carlo approximation of functions from periodic Sobolev spaces with bounded mixed derivatives in the norm of $L_{\infty}$. In particular, in the case of two dimensions we obtain sharp estimates. In addition, we compare this result with the already known behaviour of worst-case approximation errors of linear deterministic methods.

Joint work with Glenn Byrenheid (Hausdorff-Center for Mathematics, Bonn, Germany) and Robert J. Kunsch (Friedrich-Schiller-University Jena, Germany).

July 19, 17:00 ~ 17:25 - Room B7

## Higher-order cubature and the MDM for integrals on $\mathbb{R}^\mathbb{N}$

### Dirk Nuyens

### KU Leuven, Belgium - dirk.nuyens@cs.kuleuven.be

We study the application of higher-order cubature rules to approximate an infinite-variate integral over $\mathbb{R}^\mathbb{N}$. The error consists of a dimension-truncation error and the cubature rule error. We look for conditions, in the form of weights, under which a polynomially decaying error bound can be achieved.

Joint work with Dong T.P. Nguyen (KU Leuven, Belgium).

July 19, 17:30 ~ 17:55 - Room B7

## Fast Estimation of Bounds for Global Sensitivity Indices and Applications to Seismic Hazard Analysis

### Hernan Leovey

### AXPO AG , Switzerland - hleovey@yahoo.com

Global sensitivity indices are usually the method of choice for parameter identification in model calibration. Nevertheless, when the parameter dimensionality is huge, direct estimation of all Sobol' sensitivity indices for ranking becomes intractable, as is often the case in applications in the geosciences. We present a method that combines Poincaré-type inequalities with techniques of algorithmic differentiation to obtain fast estimates of upper bounds on Sobol' sensitivity indices, when the input parameters of the model are assumed to have independent distributions. We discuss the complexity of the estimation method, applications in seismic hazard analysis, and the motivation for the construction and error analysis of multi-objective quadrature rules. This talk is based on a recent publication in the Bulletin of the Seismological Society of America (DOI: 10.1785/0120160185).

Joint work with Christian Molkenthin (Potsdam University, Germany), Frank Scherbaum (Potsdam University, Germany), Andreas Griewank (Yachaytech, Ecuador), Sergei Kucherenko (Imperial College London, UK) and Fabrice Cotton (Potsdam University, Germany).

July 19, 18:00 ~ 18:25 - Room B7

## Minimal asymptotic errors for global approximation of jump-diffusion SDEs

### Pawel Przybylowicz

### AGH University of Science and Technology, Poland - pprzybyl@agh.edu.pl

We study strong global approximation of stochastic differential equations (SDEs) of the following form $$ \left\{ \begin{array}{ll} dX(t)=a(t,X(t))dt+b(t,X(t))dW(t)+c(t,X(t-))dN(t), &t\in [0,T], \\ X(0)=x_0, \end{array}\right. $$ where $x_0\in\mathbf{R}$, $a,b,c:[0,T]\times\mathbf{R}\to\mathbf{R}$ satisfy certain regularity conditions, $W=\{W(t)\}_{t\in [0,T]}$ is a one-dimensional Wiener process and $N=\{N(t)\}_{t\in [0,T]}$ is a non-homogeneous Poisson process with an intensity function $\lambda=\lambda(t)>0$. We assume that the jump and diffusion coefficients of the underlying SDE satisfy a jump commutativity condition.

We present the exact convergence rate of the minimal errors that can be achieved by arbitrary algorithms based on a finite number of observations of the Wiener and Poisson processes. We consider two classes of methods, which use equidistant and nonequidistant sampling of $W$ and $N$, respectively. We define optimal schemes, based on the classical Milstein scheme, which asymptotically attain the established minimal errors, and we discuss a method based on adaptive stepsize control. It turns out that methods based on a nonequidistant mesh are more efficient than those based on equidistant sampling.

Joint work with Andrzej Kaluza (AGH University of Science and Technology, Poland, akaluza@agh.edu.pl).

July 19, 18:30 ~ 18:55 - Room B7

## Computing the non-computable - On fundamental barriers in computational mathematics

### Anders Hansen

### University of Cambridge, United Kingdom - ach70@cam.ac.uk

At the beginning of the 20th century Hilbert asked about the existence of algorithms for decision problems, which sparked the birth of modern theoretical computer science. Similarly, in the 1980s, Smale asked about the existence of algorithms for problems in scientific computing, such as polynomial root finding. This led to a comprehensive program on the foundations of computational mathematics. Questions à la Hilbert and Smale, regarding the existence of algorithms, can be asked in any area of computational mathematics. As we show in this talk, perhaps surprisingly, the answer is often negative in many contemporary fields such as compressed sensing, statistical estimation, image processing, machine learning, deep neural networks, computer aided proofs, computational quantum mechanics, etc. The non-existence of algorithms for many of these problems may seem like a great paradox given that they are used successfully in so many applications. In this talk we will discuss how these paradoxes yield a very rich classification theory in the foundations of computational mathematics. This is done through the Solvability Complexity Index (SCI) hierarchy. By using the SCI hierarchy one can show that many core problems are formally non-computable, yet explain why they are computed efficiently on a daily basis. Moreover, this allows for a complexity theory for the many non-computable problems in the applications mentioned above. In particular, the new results show that theories such as Information-Based Complexity (IBC) can be applied in a much wider context than traditionally used.

Joint work with Alex Bastounis (University of Cambridge), Matt Colbrooke (University of Cambridge) and Verner Vlacic (University of Cambridge).