Chapter 3
Here we examine the structure of probability distributions, which allows us to identify the information encoded by a distribution and its stable characteristics. This in turn tells us how to assign sampling and likelihood distributions to describe data, as well as how to assign prior distributions that behave as desired when combined with likelihood distributions (e.g., using the Jeffreys rule, which yields a 'maximally uninformed' prior relative to a specific parameter of the likelihood). We see that combining different priors with a likelihood function affects the character of the resulting posterior, and how this changes as new data arrive.
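As a small illustration (not taken from the book's asides), the effect of combining different priors with the same likelihood, and how that effect fades with new data, can be sketched with the conjugate Beta-Binomial pair; the Jeffreys prior for the binomial parameter is Beta(1/2, 1/2), and the data counts below are hypothetical:

```python
# Sketch of conjugate prior updating for a binomial likelihood:
# a Beta(a, b) prior yields a Beta(a + successes, b + failures) posterior,
# so the influence of the prior on the posterior is easy to inspect.

def beta_binomial_posterior(a, b, successes, failures):
    """Conjugate update: Beta(a, b) prior + binomial data -> posterior parameters."""
    return a + successes, b + failures

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Hypothetical data: 7 successes in 10 trials.
priors = {"uniform Beta(1, 1)": (1.0, 1.0),
          "Jeffreys Beta(1/2, 1/2)": (0.5, 0.5)}

for name, (a, b) in priors.items():
    pa, pb = beta_binomial_posterior(a, b, 7, 3)
    print(f"{name}: posterior mean = {beta_mean(pa, pb):.4f}")

# With 100x the data (700 successes in 1000 trials) both posterior
# means approach the observed rate of 0.7: the prior's influence
# washes out as data accumulate.
for name, (a, b) in priors.items():
    pa, pb = beta_binomial_posterior(a, b, 700, 300)
    print(f"{name}: posterior mean = {beta_mean(pa, pb):.4f}")
```

The same pattern recurs throughout the chapter for the Poisson, Gaussian, and exponential likelihoods listed below.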
Programming Asides:
- computing the mean of a distribution [p122]
- computing the variance of a probability distribution [p123]
- uniform stem plot [p126]
- binomial stem plots [p128]
- Poisson stem plots [p130]
- Gaussian distribution plots [p137]
- pseudo-random number generation [p140]
- uninformed and informed uniform priors over Gaussian parameters [p144]
- the squaring transform for conversion between variance and SD [p146]
- scale parameters [p147]
- reparameterized scale parameters [p149]
- exponential posteriors derived from Jeffreys priors [p160]
- Gaussian posteriors derived from Jeffreys priors [p161]
- conjugate and Jeffreys priors for the Poisson parameter [p165]
- priors for the binomial likelihood [p168]
- posterior distributions for the Gaussian mean [p171]
- posterior distributions for the Gaussian mean and dispersion [p172]
- sampling and posterior distributions for the Cauchy [p176]