Conference abstracts

Session A7 - Stochastic Computation

July 11, 14:30 ~ 14:55 - Room B2

Beyond Well-Tempered Metadynamics algorithms for sampling multimodal target densities

Gersende Fort

CNRS, IMT (Univ. Toulouse), France   -   gersende.fort@math.univ-toulouse.fr

In many situations, sampling methods are used to compute expectations of given observables with respect to a distribution $\pi \, \mathrm{d} \lambda$ on $\mathsf{X} \subseteq \mathbb{R}^D$, when $\pi$ is highly multimodal. Free-energy based adaptive importance sampling techniques have been developed in the physics and chemistry literature to efficiently sample from such a target distribution: the auxiliary distribution $\pi_\star \, \mathrm{d} \lambda$, from which the samples are drawn, is defined, given a partition $\{\mathsf{X}_i, i \leq d \}$ of $\mathsf{X}$, as a local biasing of the target $\pi$ such that each element $\mathsf{X}_i$ has the same weight under $\pi_\star \, \mathrm{d} \lambda$. These methods fall in the class of adaptive Markov chain Monte Carlo (MCMC) samplers: the local biasing is unknown, so it is learnt on the fly and the importance function evolves along the run of the sampler. As usual with importance sampling, expectations with respect to $\pi$ are obtained from a weighted mean of the samples returned by the sampler.
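
For instance, in the Wang-Landau parametrization (one standard instance of such a local biasing; the general construction discussed in the talk may differ), writing $\theta_\star(i) = \int_{\mathsf{X}_i} \pi \, \mathrm{d} \lambda$ for the weight of stratum $i$, the biased density and the associated importance weights read

$$ \pi_\star(x) \propto \sum_{i=1}^{d} \frac{\pi(x)}{\theta_\star(i)} \, \mathbf{1}_{\mathsf{X}_i}(x), \qquad w(x) \propto \theta_\star\big(I(x)\big), $$

where $I(x)$ denotes the index of the stratum containing $x$; under $\pi_\star \, \mathrm{d} \lambda$, each stratum $\mathsf{X}_i$ indeed carries the same weight $1/d$.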

Examples of such approaches are Wang-Landau algorithms, Self-Healing Umbrella Sampling, adaptive biasing force methods, the metadynamics algorithm and the well-tempered metadynamics algorithm. Nevertheless, the main drawback of most of these methods is that two antagonistic phenomena are in competition: on the one hand, to overcome the multimodality issue, the sampler is forced to visit all the strata $\{\mathsf{X}_i, i \leq d \}$ equally; on the other hand, the algorithm spends the same time in strata with high and low weight under $\pi \, \mathrm{d} \lambda$, which makes the Monte Carlo approximation of expectations under $\pi \, \mathrm{d} \lambda$ quite inefficient.
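
In the Wang-Landau parametrization sketched above (again, only one instance of the general framework), this inefficiency can be made explicit: the sampler spends a fraction roughly $1/d$ of its time in each stratum, while a sample falling in $\mathsf{X}_i$ enters the self-normalized importance sampling estimator

$$ \widehat{\pi}(f) = \frac{\sum_{n} \theta_\star\big(I(X_n)\big) \, f(X_n)}{\sum_{n} \theta_\star\big(I(X_n)\big)} $$

with a weight proportional to $\theta_\star(i)$, so strata with negligible weight under $\pi \, \mathrm{d} \lambda$ consume a fixed fraction of the computational budget while contributing almost nothing to the estimate.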

We present a new algorithm which generalizes all the examples mentioned above; this novel algorithm is designed to reduce these two antagonistic effects. We will show that the estimation of the local bias can be seen as a Stochastic Approximation algorithm with a random step-size sequence, and the sampler as an adaptive MCMC method. We will analyze its asymptotic behavior and discuss numerically the role of some design parameters.
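
As a concrete, purely illustrative instance of this family of samplers (not the new algorithm itself), the short Python sketch below implements the Wang-Landau special case on a toy one-dimensional bimodal target: the stratum weights are learnt by a stochastic approximation update, here with a deterministic step size $1/n$ rather than the random step-size sequence discussed in the talk, the chain targets the current locally biased density, and expectations under $\pi$ are recovered by self-normalized importance weighting. The target, the partition into $d = 8$ intervals and all tuning constants are arbitrary choices made for the sketch.

import numpy as np

rng = np.random.default_rng(0)

# Unnormalized bimodal target density pi on X = [-4, 4] (toy example).
def log_pi(x):
    return np.logaddexp(-0.5 * ((x + 2.5) / 0.3) ** 2,
                        -0.5 * ((x - 2.5) / 0.3) ** 2)

# Partition of X into d strata (equal-length intervals, an arbitrary choice).
d = 8
edges = np.linspace(-4.0, 4.0, d + 1)

def stratum(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, d - 1))

theta = np.full(d, 1.0 / d)   # running estimate of the stratum weights
x = -2.5                      # current state of the chain
samples, weights = [], []

for n in range(1, 200001):
    # Random-walk Metropolis step targeting the biased density pi(x) / theta(I(x)).
    y = x + 0.5 * rng.normal()
    if -4.0 <= y <= 4.0:
        log_ratio = (log_pi(y) - np.log(theta[stratum(y)])) - \
                    (log_pi(x) - np.log(theta[stratum(x)]))
        if np.log(rng.uniform()) < log_ratio:
            x = y
    i = stratum(x)

    # Wang-Landau-type stochastic approximation update of the local bias,
    # here with the deterministic step size gamma_n = 1 / n.
    theta[i] *= 1.0 + 1.0 / n
    theta /= theta.sum()

    # Importance weight proportional to theta(I(x)) undoes the local biasing.
    samples.append(x)
    weights.append(theta[i])

samples, weights = np.array(samples), np.array(weights)
# Self-normalized importance sampling estimate of E_pi[f] for f(x) = x
# (close to 0 for this symmetric bimodal toy target).
print(np.sum(weights * samples) / np.sum(weights))

The multiplicative update followed by renormalization is one standard form of the Wang-Landau recursion; the algorithm presented in the talk uses a random step-size sequence instead and covers the other examples listed above as special cases.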

Joint work with Benjamin Jourdain (ENPC, France), Tony Lelièvre (ENPC, France) and Gabriel Stoltz (ENPC, France).
