Bayesian Statistics

Objectives

The course aims to provide an overview of Bayesian parametric and nonparametric statistics. Students will learn how to model statistical and machine learning problems from a Bayesian perspective and to study the theoretical properties of the resulting models.

Syllabus 

This course is in two parts, covering the fundamentals of Bayesian parametric and nonparametric inference, respectively. It focuses on the key probabilistic concepts and stochastic modelling tools underlying the most recent advances in the field.

Part 1
  • Foundations of Bayesian inference: exchangeability, de Finetti's representation theorem
  • Conjugacy in simple models (binomial, Poisson, Gaussian)
  • Some elements of posterior sampling, Markov chain Monte Carlo
  • Bayesian neural networks and their Gaussian process limit
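As a small illustration of two Part 1 topics, conjugacy and posterior sampling via Markov chain Monte Carlo, the sketch below works through a Beta-Binomial model: the closed-form conjugate posterior is compared with a minimal random-walk Metropolis sampler. The prior parameters and data are hypothetical example values, not course material.

```python
import numpy as np

a, b = 2.0, 2.0   # Beta(a, b) prior pseudo-counts (illustrative choice)
k, n = 7, 10      # observed data: k successes in n Bernoulli trials

# Conjugacy: with a Beta prior and binomial likelihood, the posterior is
# again Beta, with updated parameters (a + k, b + n - k).
a_post, b_post = a + k, b + n - k
analytic_mean = a_post / (a_post + b_post)

# The same posterior approximated by random-walk Metropolis on theta in (0, 1).
def log_post(theta):
    """Unnormalized log posterior density of Beta(a_post, b_post)."""
    if not 0.0 < theta < 1.0:
        return -np.inf
    return (a_post - 1) * np.log(theta) + (b_post - 1) * np.log(1 - theta)

rng = np.random.default_rng(0)
theta, samples = 0.5, []
for _ in range(50_000):
    prop = theta + 0.2 * rng.normal()   # symmetric Gaussian proposal
    # Metropolis acceptance step: accept with prob min(1, pi(prop)/pi(theta))
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
mcmc_mean = np.mean(samples[1_000:])    # discard burn-in
```

The MCMC estimate of the posterior mean agrees with the analytic value `(a + k) / (a + b + n)`, which is the point of conjugacy: it provides exact answers against which samplers can be checked.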
Part 2
  • Clustering and Dirichlet process, random partitions
  • Models beyond the Dirichlet process, random measures, Indian buffet process 
  • Some elements of Bayesian asymptotics
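As an illustration of the clustering topic in Part 2, the sketch below samples a random partition from the Chinese restaurant process, the sequential predictive scheme induced by the Dirichlet process. The concentration parameter and sample size are arbitrary illustrative values.

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Seat n customers by the Chinese restaurant process with concentration
    alpha; return the list of table sizes (a random partition of n)."""
    tables = []                        # tables[j] = number of customers at table j
    for i in range(n):                 # customer i arrives, having seen i others
        # P(join existing table j) = n_j / (i + alpha),
        # P(open a new table)      = alpha / (i + alpha)
        probs = np.array(tables + [alpha], dtype=float)
        probs /= i + alpha
        j = rng.choice(len(probs), p=probs)
        if j == len(tables):
            tables.append(1)           # new table (new cluster)
        else:
            tables[j] += 1
    return tables

rng = np.random.default_rng(1)
part = crp_partition(100, alpha=2.0, rng=rng)
```

The "rich get richer" weights make large tables more likely to grow, while the number of occupied tables grows slowly (on the order of alpha times log n), which is why the Dirichlet process yields a data-driven, unbounded number of clusters.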

Lecture notes

  • Lecture notes (handwritten) on the Bayesian nonparametric part, largely based on the bibliography below.

Bibliography

  • Hoff, P. D. (2009). A first course in Bayesian statistical methods. Springer Science & Business Media.
  • Neal, R. M. (2012). Bayesian learning for neural networks (Vol. 118). Springer Science & Business Media.
  • Hjort, N. L., Holmes, C., Müller, P., & Walker, S. G. (2010). Bayesian nonparametrics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press.
  • Orbanz, P. (2012). Lecture Notes on Bayesian Nonparametrics. Available at: http://stat.columbia.edu/~porbanz/papers/porbanz_BNP_draft.pdf
  • Kleijn, B., van der Vaart, A., & van Zanten, H. (2012). Lectures on Nonparametric Bayesian Statistics. Available at: https://staff.fnwi.uva.nl/b.j.k.kleijn/NPBayes-LecNotes-2015.pdf

Further links