Abstract
We obtain rates of contraction of posterior distributions in inverse problems defined by scales of smoothness classes. We derive abstract results for general priors, with contraction rates determined by Galerkin approximation. The rate depends on the amount of prior concentration near the true function and on the prior mass of functions with inferior Galerkin approximation. We apply the general result to non-conjugate series priors, which are shown to give near-optimal and adaptive recovery in some generality, as well as to Gaussian priors and mixtures of Gaussian priors, the latter of which are also shown to be near-optimal and adaptive. The proofs are based on general testing and approximation arguments, without explicit calculations on the posterior distribution; we are thus not restricted to priors based on the singular value decomposition of the operator. We illustrate the results with examples of inverse problems arising from differential equations.
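For orientation, the display below sketches the standard white-noise formulation of a linear inverse problem and the usual definition of a posterior contraction rate. The notation ($A$, $f_0$, $\varepsilon_n$, $\Pi$) is generic to this literature and is not taken from the paper itself.

```latex
% Standard white noise model for a linear inverse problem:
% a noisy image of the unknown function f_0 under a known operator A.
\[
  Y^{(n)} = A f_0 + \tfrac{1}{\sqrt{n}}\, Z,
  \qquad Z \ \text{Gaussian white noise}.
\]
% The posterior \Pi(\cdot \mid Y^{(n)}) contracts around f_0 at rate
% \varepsilon_n if, for every sequence M_n \to \infty,
\[
  \Pi\bigl( f : \| f - f_0 \| \ge M_n \varepsilon_n \,\big|\, Y^{(n)} \bigr)
  \;\longrightarrow\; 0
  \quad \text{in } P_{f_0}\text{-probability}.
\]
```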
| Original language | English |
|---|---|
| Pages (from-to) | 2081-2107 |
| Number of pages | 27 |
| Journal | Annales de l'Institut Henri Poincaré, Probabilités et Statistiques |
| Volume | 56 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Aug 2020 |
Keywords
- Adaptive estimation
- Gaussian prior
- Hilbert scale
- Linear inverse problem
- Nonparametric Bayesian estimation
- Posterior contraction rate
- Random series prior
- Regularity scale
- White noise