
Sampling Strategies in Sparse Bayesian Inference

Mathematics and AI for Imaging Seminars II

Time: Wednesday, 16:00-17:00, Oct. 23, 2024

Venue: A04, 8th Floor, Shuangqing Complex Building

Organizer: Chenglong Bao

Speaker: Yiqiu Dong

Abstract:

Regularization is a common tool in variational inverse problems for imposing assumptions on the parameters of the problem. One such assumption is sparsity, which is commonly promoted with lasso and total-variation-type regularization. Although the solutions of many such regularized inverse problems can be interpreted as maximum a posteriori (MAP) estimates under suitably chosen posterior distributions, samples from these distributions are generally not sparse. In this talk, we present a sampling strategy for an implicitly defined probability distribution that combines the effect of sparsity-imposing regularization with Gaussian distributions. It extends the randomize-then-optimize (RTO) method to sampling from implicitly defined continuous probability distributions. We study the properties of these regularized distributions and compare the proposed method with Langevin-based methods, which are often used to sample from high-dimensional densities.
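
As a point of reference for the abstract, the classical randomize-then-optimize construction can be sketched in the purely linear-Gaussian setting, where each randomly perturbed least-squares solve yields an exact posterior sample. The small test problem below (forward operator A, noise level sigma, Gaussian prior factor L) is a hypothetical illustration, and the sparsity-promoting extension presented in the talk is not implemented here.

```python
import numpy as np

# Minimal randomize-then-optimize (RTO) sketch for a linear-Gaussian model:
#   posterior pi(x) proportional to exp( -||A x - b||^2 / (2 sigma^2) - ||L x||^2 / 2 ).
# In this purely Gaussian setting, each randomly perturbed least-squares
# solve returns an exact, independent posterior sample (no Metropolis step needed).

rng = np.random.default_rng(0)

# Hypothetical small test problem (illustrative only).
n = 50
A = np.tril(np.ones((n, n))) / n          # forward operator (running-average type)
x_true = np.zeros(n)
x_true[10:20] = 1.0                       # piecewise-constant "truth"
sigma = 0.01                              # noise standard deviation
b = A @ x_true + sigma * rng.normal(size=n)
L = np.eye(n)                             # Gaussian prior precision factor

def rto_sample():
    """One RTO draw: solve a least-squares problem with perturbed data and prior."""
    eps1 = rng.normal(size=n)             # perturbation of the data-fit term
    eps2 = rng.normal(size=n)             # perturbation of the prior term
    M = np.vstack([A / sigma, L])         # stacked least-squares system
    rhs = np.concatenate([(b + sigma * eps1) / sigma, eps2])
    x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return x

samples = np.array([rto_sample() for _ in range(200)])
print("posterior mean, first 5 entries:", np.round(samples.mean(axis=0)[:5], 3))
```

Because the prior in this sketch is Gaussian, the resulting samples are not sparse; replacing the Gaussian prior term with a sparsity-imposing one is precisely the setting addressed in the talk, where RTO is extended to sampling from implicitly defined distributions.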

Date: October 22, 2024