This repository hosts the code to reproduce all results presented in the report On Functional Priors and Cold Posteriors in Bayesian Neural Networks.
We reproduce the cold posterior effect on ResNet20+CIFAR10 with data augmentation turned on. Observe that training accuracy, training confidence, and test accuracy are all closely correlated.
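For intuition, posterior "temperature" rescales the log-posterior: sampling targets p(θ|D)^(1/T), and the cold posterior effect is the observation that T < 1 often outperforms the nominal T = 1. The sketch below illustrates this mechanism on a toy 1-D Gaussian posterior with an unadjusted Langevin sampler; the toy target and step sizes are illustrative and are not the repo's actual SG-MCMC sampler:

```python
import numpy as np

def grad_log_post(theta):
    # Toy log-posterior: standard normal, log p(theta) = -theta^2 / 2 + const.
    return -theta

def langevin_sample(T=0.2, step=1e-2, n_steps=100_000, seed=0):
    """Sample from the tempered posterior p(theta)^(1/T) via unadjusted Langevin.

    Cooling (T < 1) sharpens the distribution around its mode; here the
    tempered target is N(0, T), so the sample variance should be close to T.
    """
    rng = np.random.default_rng(seed)
    theta = 0.0
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal()
        theta += 0.5 * step * grad_log_post(theta) / T + np.sqrt(step) * noise
        samples.append(theta)
    return np.array(samples[n_steps // 2:])  # discard burn-in

cold = langevin_sample(T=0.2)
warm = langevin_sample(T=1.0)
print(cold.var(), warm.var())  # cold variance ~ 0.2, warm variance ~ 1.0
```

The same rescaling applied to a neural network's log-likelihood and log-prior is what the cold posterior experiment sweeps over.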
This raises the question: could we obtain a confident model at temperature T = 1, without cooling the posterior?
We demonstrate that correctly sampling from a prior distribution over model outputs requires a "change of variables" term that has not previously been discussed. Using a novel approximation for this term, we are able to control the confidence of a BNN, closely matching the expected confidence of a Dirichlet prior.
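To make "expected confidence of a Dirichlet prior" concrete, the sketch below Monte-Carlo-estimates the expected maximum class probability under a symmetric Dirichlet(α) over 10 classes (matching CIFAR-10); the specific α values are illustrative, not the ones used in the report:

```python
import numpy as np

def expected_confidence(alpha, num_classes=10, n_samples=100_000, seed=0):
    """E[max_k p_k] for p ~ Dirichlet(alpha, ..., alpha): the average
    confidence a model whose outputs match this prior should exhibit."""
    rng = np.random.default_rng(seed)
    probs = rng.dirichlet([alpha] * num_classes, size=n_samples)
    return probs.max(axis=1).mean()

# Small alpha concentrates mass near the simplex corners (confident
# predictions); large alpha pulls it toward uniform (confidence -> 1/10).
for alpha in [0.01, 0.1, 1.0, 10.0]:
    print(alpha, round(expected_confidence(alpha), 3))
```

Sweeping α thus gives a target confidence curve against which the sampled BNNs can be compared.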
The core directory contains all the required code for sampling models. The run.py script provides a simple interface for sampling a single model.
All figures in the report were generated using the provided Jupyter notebooks.
The experiments directory contains two Python scripts: one for reproducing the cold posterior experiment (Figure 2.2), and one for reproducing the Dirichlet prior experiment (Figure 5.1). While these scripts are capable of fully reproducing our results, they are intended primarily as readable reference implementations: you will likely want to add experiment-management code to run multiple jobs in parallel, monitor them, and so on. Since reproducing these experiments from scratch would take thousands of TPU-core hours, we also provide download links for model weights and training logs for the cold posterior experiment (25 GB) and the Dirichlet prior experiment (1.9 GB).