Author details
Author: Pierre Chainais
Documents available written by this author (1)
Asymptotically exact data augmentation: models and Monte Carlo sampling with applications to Bayesian inference / Maxime Vono (2020)
Title: Asymptotically exact data augmentation: models and Monte Carlo sampling with applications to Bayesian inference
Document type: Thesis/HDR
Authors: Maxime Vono, Author; Nicolas Dobigeon, Thesis supervisor; Pierre Chainais, Thesis co-supervisor
Publisher: Toulouse: Université de Toulouse
Publication year: 2020
Extent: 200 p.
Format: 21 x 30 cm
General note: bibliography
Thesis submitted for the Doctorate of the Université de Toulouse, specialty Signal, Image, Acoustique et Optimisation
Languages: English (eng)
Descriptors: [IGN subject headings] Signal processing
[IGN terms] sampling
[IGN terms] Gibbs sampling
[IGN terms] Bayesian estimation
[IGN terms] Monte Carlo method
[IGN terms] Markov chain Monte Carlo method
[IGN terms] optimization (mathematics)
[IGN terms] Gaussian process
[IGN terms] linear regression
Decimal classification: THESE Theses and HDR
Abstract: (author) Numerous machine learning and signal/image processing tasks can be formulated as statistical inference problems. As an archetypal example, recommendation systems rely on the completion of a partially observed user/item matrix, which can be conducted via the joint estimation of latent factors and activation coefficients. More formally, the object to be inferred is usually defined as the solution of a variational or stochastic optimization problem. In particular, within a Bayesian framework, this solution is defined as the minimizer of a cost function referred to as the posterior loss. In the simple case where this function is quadratic, the Bayesian estimator is the posterior mean, which minimizes the mean square error and is defined as an integral with respect to the posterior distribution. In most real-world applicative contexts, computing such integrals is not straightforward. One alternative is Monte Carlo integration, which approximates any expectation under the posterior distribution by an empirical average over samples from the posterior. Monte Carlo integration therefore requires efficient algorithmic schemes able to generate samples from the desired posterior distribution. A vast literature dedicated to random variable generation has proposed various Monte Carlo algorithms. For instance, Markov chain Monte Carlo (MCMC) methods, of which the Gibbs sampler and the Metropolis-Hastings algorithm are famous instances, define a wide class of algorithms that generate a Markov chain with the desired stationary distribution. Despite their apparent simplicity and generality, conventional MCMC algorithms may be computationally inefficient for large-scale, distributed and/or highly structured problems. The main objective of this thesis is to introduce new models and related MCMC approaches to alleviate these issues. The intractability of the posterior distribution is tackled by proposing a class of approximate but asymptotically exact data augmentation (AXDA) models. Two Gibbs samplers targeting approximate posterior distributions based on the AXDA framework are then proposed, and their benefits are illustrated on challenging signal processing, image processing and machine learning problems. A detailed theoretical study of the convergence rates associated with one of these two Gibbs samplers is also conducted, revealing explicit dependencies on the dimension, the condition number of the negative log-posterior and the prescribed precision. In this work, we also pay attention to the feasibility of the sampling steps involved in the proposed Gibbs samplers. Since one of these steps requires sampling from a possibly high-dimensional Gaussian distribution, we review and unify existing approaches by introducing a framework that stands as the stochastic counterpart of the celebrated proximal point algorithm. This strong connection between simulation and optimization is not isolated in this thesis. Indeed, we also show that the derived Gibbs samplers share tight links with quadratic penalty methods and that the AXDA framework yields a class of envelope functions related to the Moreau envelope. (Toy sketches of Monte Carlo integration and of the splitting construction follow this record.)
Contents note:
Introduction
1- Asymptotically exact data augmentation
2- Monte Carlo sampling from AXDA
3- A non-asymptotic convergence analysis of the Split Gibbs sampler
4- High-dimensional Gaussian sampling: A unifying approach based on a stochastic proximal point algorithm
5- Back to optimization: The tempered AXDA envelope
Conclusion
Record number: 28575
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: French thesis
Thesis note: Doctoral thesis: Signal, Image, Acoustique et Optimisation: Toulouse: 2020
Host institution: Institut de Recherche en Informatique de Toulouse
Online: https://tel.archives-ouvertes.fr/tel-03143936/document
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97833
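The abstract describes Monte Carlo integration: a posterior expectation is approximated by an empirical average over posterior samples. The sketch below illustrates this on a toy conjugate Gaussian model where the exact posterior mean is known in closed form, so the estimate can be checked. The model, parameter values and variable names are assumptions made for illustration and do not come from the thesis.

```python
import numpy as np

# Toy model (assumed for illustration): prior x ~ N(0, tau2),
# likelihood y | x ~ N(x, sigma2). The posterior is Gaussian with
# known mean, so the Monte Carlo estimate can be checked exactly.
rng = np.random.default_rng(0)
sigma2, tau2 = 1.0, 4.0        # likelihood and prior variances (assumed)
y = 2.5                        # a single observation (assumed)

# Closed-form posterior for this conjugate model.
post_var = 1.0 / (1.0 / sigma2 + 1.0 / tau2)
post_mean = post_var * y / sigma2

# Monte Carlo integration: approximate E[x | y] by the empirical
# average of draws from the posterior distribution.
samples = rng.normal(post_mean, np.sqrt(post_var), size=100_000)
print(samples.mean(), post_mean)   # the two values should be close
```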
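The splitting construction behind AXDA can be sketched on the same toy model: an auxiliary variable z is coupled to x through a quadratic term with parameter rho2, and a two-block Gibbs sampler alternates between the two resulting Gaussian conditionals; as rho2 tends to zero, the augmented model recovers the exact posterior. This is a minimal illustration of the idea under assumed parameter values, not the thesis's samplers.

```python
import numpy as np

# Augmented target (assumed toy instance of the splitting idea):
#   pi_rho(x, z) ∝ exp(-(x - y)^2 / (2*sigma2)   # likelihood term on x
#                      - z^2 / (2*tau2)          # prior term moved onto z
#                      - (x - z)^2 / (2*rho2))   # quadratic coupling
rng = np.random.default_rng(1)
sigma2, tau2 = 1.0, 4.0   # same toy model as above
rho2 = 0.1                # splitting parameter: smaller -> more exact
y = 2.5

def sample_x_given_z(z):
    # x | z is Gaussian: product of the likelihood and coupling terms.
    prec = 1.0 / sigma2 + 1.0 / rho2
    return rng.normal((y / sigma2 + z / rho2) / prec, np.sqrt(1.0 / prec))

def sample_z_given_x(x):
    # z | x is Gaussian: product of the prior and coupling terms.
    prec = 1.0 / tau2 + 1.0 / rho2
    return rng.normal((x / rho2) / prec, np.sqrt(1.0 / prec))

x = z = 0.0
xs = []
for it in range(50_000):
    x = sample_x_given_z(z)
    z = sample_z_given_x(x)
    if it >= 5_000:            # discard burn-in
        xs.append(x)

# Exact posterior mean of the non-augmented model, for comparison.
exact_mean = (y / sigma2) / (1.0 / sigma2 + 1.0 / tau2)
print(np.mean(xs), exact_mean)  # close for small rho2
```

Shrinking rho2 reduces the approximation bias but tightens the coupling between x and z and slows the chain's mixing, a trade-off of the kind the thesis's non-asymptotic convergence analysis makes explicit.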