next up previous contents
Next: Bayesian testing Up: Model Assessment and Model Previous: Significance Testing, Goodness-of-fit Tests   Contents


Bayesian methods

Kashyap (1977) [209] Bayesian posterior probabilities for a set of time series models. Normality assumptions are used to obtain closed-form results. Comments on comparison to the hypothesis testing approach. Comments and a small simulation comparison (including AIC) in Stoica (1979) [353], with a response in Kashyap (1979) [210] (see, in particular, the last paragraph of that).

Rissanen (1978) [316] Estimating order and parameters of a model based on the principle of `shortest data description'. For a normal linear process, leads to a criterion which is very close to BIC. [I do not pretend to understand this one.]

Kashyap (1982) [211] Choice of time series models. The decision rule which minimizes the average probability of error in finite samples is to choose the model with maximum posterior probability. A Taylor series approximation for this, which simplifies for Gaussian ARMA models.

Racine et al. (1986) [301] Examples of the use of Bayesian methods in applications in the pharmaceutical industry (with a long discussion). Some examples of Bayes factors and model averaging.

Poskitt (1987) [297] Considers model selection as a Bayesian decision problem, where utility function is related to Kullback-Leibler information. Asymptotic approximation of the posterior expected utility leads to a penalised criterion which does not depend on the prior for the parameters. Notes on model complexity, and linear model as an example.

Mitchell and Beauchamp (1988) [274] Bayesian approach to subset selection in linear models. Achieved by specifying a `spike-and-slab' (mass at zero) prior for each regression coefficient. The prior and results depend crucially on the relative height of the spike; a Bayesian cross-validation approach is proposed for estimating this through overall prediction error.
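[A minimal sketch of the spike-and-slab idea, not taken from the paper: each coefficient is exactly zero with some probability (the spike) and otherwise drawn from a diffuse normal distribution (the slab). The parameter names p_spike and slab_sd are illustrative, as are the numbers used.]

```python
import random

def sample_spike_and_slab(p_spike=0.5, slab_sd=10.0):
    """Draw one regression coefficient from a spike-and-slab prior:
    with probability p_spike the coefficient is exactly zero (the spike),
    otherwise it comes from a diffuse normal `slab' centred at zero."""
    if random.random() < p_spike:
        return 0.0
    return random.gauss(0.0, slab_sd)

# Illustrative draws: roughly half should fall exactly on the spike at zero.
random.seed(1)
draws = [sample_spike_and_slab(0.5, 10.0) for _ in range(2000)]
frac_zero = sum(d == 0.0 for d in draws) / len(draws)
```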

Pettit and Young (1990) [293] Considers influence of individual observations on BF by computing difference between log-BF with the observation included and excluded. Various examples (normal models, linear, log-linear, testing distribution).

Pettit (1992) [294] BF for testing whether a given observation is an outlier. Priors derived using the device of imaginary observations.

Kass (1993) [212] An overview of Bayes factors. Discusses computation of BFs, BIC, choice of priors and especially sensitivity of BF to the priors.

Forster (1995) [140] In a book review, comments on the use of Bayesianism in the philosophy of science. Related to issues in Bayesian model choice.

Kass and Raftery (1995) [213] Review article and bibliography. Calculating Bayes factors (including BIC and simulation methods). Choice of priors; model averaging; Bayes factors vs. frequentist hypothesis testing.
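[To illustrate the BIC route to calculating Bayes factors reviewed there, the following sketch (my own example with made-up data, not from the paper) computes BIC for two nested Gaussian regression models and uses the BIC difference as a rough approximation to twice the log Bayes factor.]

```python
import math

def gaussian_loglik(residuals):
    """Maximised Gaussian log-likelihood of a fitted model, from its residuals."""
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n  # MLE of the error variance
    return -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)

def bic(log_lik, k, n):
    """Schwarz criterion: BIC = -2 log L + k log n, with k free parameters."""
    return -2.0 * log_lik + k * math.log(n)

# Made-up data with a clear linear trend
y = [2.1, 1.9, 3.2, 2.8, 3.9, 4.1, 5.2, 4.8]
x = list(range(1, 9))
n = len(y)

# Model 1: intercept only
mean_y = sum(y) / n
res1 = [yi - mean_y for yi in y]

# Model 2: straight line, fitted by ordinary least squares
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
res2 = [yi - (a + b * xi) for xi, yi in zip(x, y)]

bic1 = bic(gaussian_loglik(res1), 2, n)  # mean + error variance
bic2 = bic(gaussian_loglik(res2), 3, n)  # intercept, slope, error variance
# BIC difference approximates 2 log BF of model 2 against model 1
two_log_bf = bic1 - bic2
```

With these data the straight line wins decisively, so the approximate 2 log BF is large and positive.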

Raftery and Richardson (1995) [310] Bayesian model selection for GLMs for an epidemiological audience. Choice of priors, Occam's window, approximations for BF as in Raftery (1994) [306]. In the initial analysis, transformations and thresholds are chosen using the `Alternating conditional expectations' algorithm. Computations using the GLIB S-plus functions.

Kass and Wasserman (1996) [216] Review of reference (noninformative) priors in Bayesian analysis. Brief discussion of Bayes factors, Jeffreys's choice of priors for them, and BIC.

Le et al. (1996) [237] Selection of the order of an AR process in the presence of outliers. Robust Bayes factors (Laplace approximations), compared to AIC and BIC. Model averaging for predictions.

Bandyopadhayay et al. (1996) [25] A paper from a philosophy of science journal. Discussion of Bayesian model selection in `curve fitting'. Normal polynomial regression with conjugate priors as an example. Tries to derive a formula for p(M given D) under improper priors, but gets the algebra wrong. A feeble discussion of predictive accuracy vs. parsimony. Not very impressive at all.

Albert (1997) [14] Goodness-of-fit testing of two-way contingency tables using BFs. Choice of proper priors for the log-linear terms restricted by the smaller model. Much emphasis on model elaboration using outlier models. Notes on model averaging.

Key et al. (1998) [223] Review of Bayesian model choice. List of contents only, paper itself distributed at the 1998 Bayesian conference [have not got this].

[The following section lists references to approximations of Bayes factors (including BIC). Section 3.1 lists articles which emphasise the `significance testing' aspects of BFs, for example comparisons to real significance tests and the so-called Lindley's paradox. Section 3.2 contains articles where the emphasis is on the priors leading to particular approximations. References in Subsection 3.3 discuss, in particular, real or imaginary `training samples' which are used to obtain the prior.]



Subsections
Jouni Kuha 2003-07-16