Chapter 6
Model Selection
There are two main types of analysis performed in a scientific context: measurement and model selection.
Model selection allows us to compare a set of competing hypotheses and select the best among them based on their probabilities, providing a direct comparison of hypotheses that does not suffer from the weaknesses of classical null hypothesis testing. This chapter begins with a discussion of the 'Occam penalty' that arises naturally within Bayesian model selection computations, favoring simpler models over more complex ones. We then develop a model selection algorithm that allows models to be compared in a wide range of circumstances, and demonstrate its utility in a series of worked examples. Finally, classical frequentist null hypothesis testing is compared directly with Bayesian model selection in several basic examples, to highlight some of the disadvantages of the classical approach relative to more modern techniques.
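The Occam penalty can be illustrated with a minimal sketch (not taken from the book's Programming Asides; the function names here are illustrative). For binomial data, the evidence for a model with a fixed rate of 0.5 is simply the binomial likelihood at that rate, while a model with a uniform prior over the rate must average its likelihood over all possible rate values, which yields the marginal value 1/(n + 1). When the data are consistent with the simpler model, the flexible model's averaged evidence is lower, and the simpler model is favored:

```python
from math import comb

def evidence_fixed(k, n, theta=0.5):
    """Evidence for a model that fixes the success rate at theta:
    the binomial likelihood of k successes in n trials."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def evidence_uniform(k, n):
    """Evidence for a model with a uniform prior over the rate:
    the integral of Binom(k | n, theta) over theta in [0, 1],
    which equals 1 / (n + 1) for every k."""
    return 1 / (n + 1)

# Data consistent with the simpler model: 10 successes in 20 trials.
k, n = 10, 20
bayes_factor = evidence_fixed(k, n) / evidence_uniform(k, n)
print(bayes_factor)  # greater than 1: the simpler model is favored
```

The flexible model can accommodate any observed count, but it pays for that flexibility by spreading its prior probability over all rates; this is the Occam penalty developed in the chapter.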
Programming Asides:

basic lever-pressing example [p381]

evidence in the balanced case [p389]

create penalty plot [p392]

evidence calculation [p393]

penalized ratio of gaussian likelihoods [p398]

evidence based on a truncated range of possible rate parameters [p403]

calculation using the incomplete beta function [p404]

difference of binomial rates [p408]

vernier bias [p414]

vernier setting precision [p418]

evidence for signal equality [p423]

zero-slope model evidence [p431]

evidence for zero-slope model [p435]

multi-source bias [p441]

multi-source dispersion [p443]

multi-source slope comparisons [p449]

multi-source slope comparisons [p451]

evidence for the causal mechanism [p459]

evidence for the causal mechanism [p473]

numerical assessment of the evidence [p482]

frequentist algorithm versus 2-model comparison [p493]

frequentist algorithm versus 3-model comparison [p495]

measurement as model comparison [p499]