
References

Published online by Cambridge University Press:  18 February 2019

M. Antónia Amaral Turkman, Universidade de Lisboa
Carlos Daniel Paulino, Universidade de Lisboa
Peter Müller, University of Texas, Austin
Chapter from Computational Bayesian Statistics: An Introduction, pp. 232–240.
Publisher: Cambridge University Press
Print publication year: 2019


Amaral Turkman, M. A. 1980. Applications of predictive distributions. PhD thesis, University of Sheffield. (Cited on page 13.)
Andrieu, C., Doucet, A., and Robert, C. P. 2004. Computational advances for and from Bayesian analysis. Statistical Science, 19(1), 118–127. (Cited on page 129.)
Basu, D., and Pereira, C. A. B. 1982. On the Bayesian analysis of categorical data: the problem of nonresponse. Journal of Statistical Planning and Inference, 6(4), 345–362. (Cited on page 40.)
Belitz, C., Brezger, A., Kneib, T., Lang, S., and Umlauf, N. 2013. BayesX: Software for Bayesian Inference in Structured Additive Regression Models. Version 2.1. (Cited on page 172.)
Berger, J. O. 1984. The robust Bayesian viewpoint (with discussion). Pages 63–144 of: Kadane, J. B. (ed.), Robustness of Bayesian Analyses. North-Holland. (Cited on page 1.)
Bernardo, J., and Smith, A. F. M. 2000. Bayesian Theory. Wiley. (Cited on pages 26 and 88.)
Best, N., Cowles, M., and Vines, S. 1995. CODA Manual Version 0.30. (Cited on pages 116 and 201.)
Bhattacharya, A., Pati, D., Pillai, N. S., and Dunson, D. B. 2015. Dirichlet–Laplace priors for optimal shrinkage. Journal of the American Statistical Association, 110(512), 1479–1490. (Cited on page 134.)
Blangiardo, M., and Cameletti, M. 2015. Spatial and Spatio-Temporal Bayesian Models with R-INLA. Wiley. (Cited on pages 150 and 172.)
Blangiardo, M., Cameletti, M., Baio, G., and Rue, H. 2013. Spatial and spatio-temporal models with R-INLA. Spatial and Spatio-Temporal Epidemiology, 4(Supplement C), 33–49. (Cited on page 163.)
Blei, D. M., and Jordan, M. I. 2006. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1), 121–143. (Cited on page 168.)
Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. 2017. Variational inference: a review for statisticians. Journal of the American Statistical Association, 112(518), 859–877. (Cited on pages 164, 165, and 171.)
Box, G. 1980. Sampling and Bayes inference in scientific modelling and robustness. Journal of the Royal Statistical Society, A, 143, 383–430. (Cited on page 70.)
Box, G. 1983. An apology for ecumenism in statistics. Pages 51–84 of: Box, G., Leonard, T., and Wu, C.-F. (eds.), Scientific Inference, Data Analysis, and Robustness. Academic Press. (Cited on page 70.)
Brezger, A., Kneib, T., and Lang, S. 2005. BayesX: analyzing Bayesian structural additive regression models. Journal of Statistical Software, 14(11), 1–22. (Cited on pages 172, 192, and 193.)
Burnham, K. P., and Anderson, D. R. 2002. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. 2nd edn. Springer.
Carlin, B. P., and Chib, S. 1995. Bayesian model choice via Markov chain Monte Carlo. Journal of the Royal Statistical Society, B, 57(3), 473–484. (Cited on page 136.)
Carlin, B. P., and Gelfand, A. E. 1991. An iterative Monte Carlo method for non-conjugate Bayesian analysis. Statistics and Computing, 1(2), 119–128. (Cited on page 65.)
Carlin, B. P., and Louis, T. A. 2009. Bayesian Methods for Data Analysis. CRC Press. (Cited on pages viii and 78.)
Carpenter, B., Gelman, A., Hoffman, M., et al. 2017. Stan: a probabilistic programming language. Journal of Statistical Software, 76(1), 1–32. (Cited on pages 172 and 186.)
Carvalho, C. M., Polson, N. G., and Scott, J. G. 2010. The horseshoe estimator for sparse signals. Biometrika, 97(2), 465–480. (Cited on page 134.)
Celeux, G., Forbes, F., Robert, C. P., and Titterington, D. M. 2006. Deviance information criteria for missing data models. Bayesian Analysis, 1(4), 651–673. (Cited on page 79.)
Chen, M.-H. 1994. Importance-weighted marginal Bayesian posterior density estimation. Journal of the American Statistical Association, 89, 818–824. (Cited on page 58.)
Chen, M.-H., and Shao, Q. 1999. Monte Carlo estimation of Bayesian credible and HPD intervals. Journal of Computational and Graphical Statistics, 8, 69–92. (Cited on page 47.)
Chen, M.-H., Shao, Q., and Ibrahim, J. G. 2000. Monte Carlo Methods in Bayesian Computation. Springer. (Cited on pages 54, 57, and 129.)
Chib, S. 1995. Marginal likelihood from the Gibbs output. Journal of the American Statistical Association, 90(432), 1313–1321. (Cited on pages 129 and 130.)
Chib, S., and Jeliazkov, I. 2001. Marginal likelihood from the Metropolis–Hastings output. Journal of the American Statistical Association, 96(453), 270–281. (Cited on pages 129 and 131.)
Christensen, R., Johnson, W., Hanson, T., and Branscum, A. 2011. Bayesian Ideas and Data Analysis: An Introduction for Scientists and Statisticians. CRC Press. (Cited on page viii.)
Cowles, M. K. 1994. Practical issues in Gibbs sampler implementation with application to Bayesian hierarchical modelling of clinical trial data. PhD thesis, University of Minnesota. (Cited on page 201.)
Cowles, M. K., and Carlin, B. P. 1996. Markov chain Monte Carlo convergence diagnostics: a comparative review. Journal of the American Statistical Association, 91, 883–904. (Cited on pages 116 and 199.)
Damien, P., Wakefield, J., and Walker, S. 1999. Gibbs sampling for Bayesian non-conjugate and hierarchical models by using auxiliary variables. Journal of the Royal Statistical Society, B, 61(2), 331–344. (Cited on page 106.)
Dawid, A. P. 1985. The impossibility of inductive inference. (Invited discussion of 'Self-calibrating priors do not exist' by D. Oakes.) Journal of the American Statistical Association, 80, 340–341. (Cited on page 14.)
Dellaportas, P., and Papageorgiou, I. 2006. Multivariate mixtures of normals with unknown number of components. Statistics and Computing, 16(1), 57–68. (Cited on page 148.)
Dellaportas, P., Forster, J. J., and Ntzoufras, I. 2002. On Bayesian model and variable selection using MCMC. Statistics and Computing, 12(1), 27–36. (Cited on pages 132, 137, and 138.)
Dempster, A. P., Laird, N. M., and Rubin, D. B. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B, 39(1), 1–38. (Cited on page 144.)
de Valpine, P., Turek, D., Paciorek, C. J., et al. 2017. Programming with models: writing statistical algorithms for general model structures with NIMBLE. Journal of Computational and Graphical Statistics, 26(2), 403–413. (Cited on page 172.)
Devroye, L. 1986. Non-Uniform Random Variate Generation. Springer. (Cited on page 43.)
Doucet, A., and Lee, A. 2018. Sequential Monte Carlo methods. Pages 165–190 of: Drton, M., Lauritzen, S. L., Maathuis, M., and Wainwright, M. (eds.), Handbook of Graphical Models. CRC Press. (Cited on page 60.)
Doucet, A., Freitas, N. D., and Gordon, N. 2001. Sequential Monte Carlo Methods in Practice. Springer. (Cited on page 60.)
Fahrmeir, L., and Tutz, G. 2001. Multivariate Statistical Modeling Based on Generalized Linear Models. Springer. (Cited on page 161.)
Gelfand, A. E. 1996. Model determination using sampling-based methods. Pages 145–161 of: Gilks, W. R., Richardson, S., and Spiegelhalter, D. J. (eds.), Markov Chain Monte Carlo in Practice. Chapman & Hall. (Cited on pages 70, 73, 75, and 85.)
Gelfand, A. E., and Dey, D. K. 1994. Bayesian model choice: asymptotics and exact calculations. Journal of the Royal Statistical Society, B, 56, 501–514. (Cited on page 87.)
Gelfand, A. E., and Smith, A. F. M. 1990. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398–409. (Cited on pages 49, 57, 58, and 174.)
Gelfand, A. E., Hills, S., Racine-Poon, A., and Smith, A. F. M. 1990. Illustration of Bayesian inference in normal data models using Gibbs sampling. Journal of the American Statistical Association, 85(412), 972–985. (Cited on page 166.)
Gelfand, A. E., Smith, A. F. M., and Lee, T. 1992. Bayesian analysis of constrained parameter and truncated data problems using Gibbs sampling. Journal of the American Statistical Association, 87, 523–531.
Gelman, A., and Hill, J. 2006. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press. (Cited on page 185.)
Gelman, A., and Meng, X. L. 1996. Model checking and model improvement. Pages 189–202 of: Gilks, W. R., Richardson, S., and Spiegelhalter, D. J. (eds.), Markov Chain Monte Carlo in Practice. Chapman & Hall. (Cited on page 73.)
Gelman, A., and Rubin, D. B. 1992. Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457–472. (Cited on page 199.)
Gelman, A., Carlin, J. B., Stern, H. S., et al. 2014a. Bayesian Data Analysis. 3rd edn. Chapman & Hall/CRC Press. (Cited on page viii.)
Gelman, A., Hwang, J., and Vehtari, A. 2014b. Understanding predictive information criteria for Bayesian models. Statistics and Computing, 24, 997–1016.
Geman, S., and Geman, D. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721–741. (Cited on pages 90 and 98.)
Gentle, J. E. 2004. Random Number Generation and Monte Carlo Methods. 2nd edn. Springer. (Cited on page 43.)
Genz, A., and Kass, R. E. 1997. Subregion adaptive integration of functions having a dominant peak. Journal of Computational and Graphical Statistics, 6, 92–111. (Cited on page 53.)
George, E. I., and McCulloch, R. 1997. Approaches for Bayesian variable selection. Statistica Sinica, 7, 339–373. (Cited on pages 131, 132, and 133.)
George, E. I., and McCulloch, R. E. 1993. Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88(423), 881–889. (Cited on pages 131 and 132.)
Geweke, J. 1989. Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57, 1317–1339. (Cited on pages 52 and 68.)
Geweke, J. 1992. Evaluating the accuracy of sampling-based approaches to calculating posterior moments. In: Bayesian Statistics 4. Clarendon Press. (Cited on page 199.)
Geweke, J. 2004. Getting it right. Journal of the American Statistical Association, 99(467), 799–804. (Cited on pages 127 and 128.)
Geyer, C. J. 1992. Practical Markov chain Monte Carlo (with discussion). Statistical Science, 7, 473–511. (Cited on page 115.)
Gillies, D. 2001. Bayesianism and the fixity of the theoretical framework. Pages 363–379 of: Corfield, J., and Williamson, J. (eds.), Foundations of Bayesianism. Kluwer Academic Publishers. (Cited on page 12.)
Givens, G. H., and Hoeting, J. A. 2005. Computational Statistics. Wiley. (Cited on page 97.)
Gradshteyn, I., and Ryzhik, I. 2007. Table of Integrals, Series, and Products, Jeffrey, A., and Zwillinger, D. (eds.). Academic Press.
Green, P. J. 1995. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82, 711–732. (Cited on page 138.)
Hastings, W. K. 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97–109. (Cited on page 90.)
Heidelberger, P., and Welch, P. 1983. Simulation run length control in the presence of an initial transient. Operations Research, 31, 1109–1144. (Cited on page 199.)
Henderson, H. V., and Velleman, P. F. 1981. Building multiple regression models interactively. Biometrics, 37, 391–411. (Cited on page 73.)
Hoff, P. D. 2009. A First Course in Bayesian Statistical Methods. Springer. (Cited on page viii.)
Hoffman, M. D., and Gelman, A. 2014. The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593–1623. (Cited on page 186.)
Jaynes, E. T. 1968. Prior probabilities. IEEE Transactions on Systems Science and Cybernetics, 4, 227–291. (Cited on page 22.)
Jaynes, E. T. 2003. Probability Theory: The Logic of Science. Cambridge University Press. (Cited on pages 13 and 21.)
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. 1999. An introduction to variational methods for graphical models. Machine Learning, 37(2), 183–233. (Cited on page 164.)
Karabatsos, G. 2015. A menu-driven software package for Bayesian regression analysis. The ISBA Bulletin, 22, 13–16. (Cited on page 172.)
Kass, R. E., and Raftery, A. E. 1995. Bayes factors. Journal of the American Statistical Association, 90, 773–795. (Cited on page 85.)
Kass, R. E., and Wasserman, L. 1996. The selection of prior distributions by formal rules. Journal of the American Statistical Association, 91, 1343–1370. (Cited on page 17.)
Kempthorne, O., and Folks, L. 1971. Probability, Statistics and Data Analysis. Iowa State University Press. (Cited on page 7.)
Kneib, T., Heinzl, F., Brezger, A., Bove, D., and Klein, N. 2014. BayesX: R Utilities Accompanying the Software Package BayesX. R package version 0.2-9. (Cited on page 193.)
Korner-Nievergelt, F., von Felten, S., Roth, T., et al. 2015. Bayesian Data Analysis in Ecology Using Linear Models with R, BUGS, and Stan. Academic Press. (Cited on page 172.)
Kruschke, J. 2011. Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Academic Press/Elsevier. (Cited on page 172.)
Kruschke, J. 2014. Doing Bayesian Data Analysis: A Tutorial with R, JAGS and Stan. Academic Press/Elsevier. (Cited on page 172.)
Kucukelbir, A., Tran, D., Ranganath, R., Gelman, A., and Blei, D. M. 2017. Automatic differentiation variational inference. Journal of Machine Learning Research, 18(1), 430–474. (Cited on page 168.)
Kuhn, T. S. 1962. The Structure of Scientific Revolutions. University of Chicago Press. (Cited on page 5.)
Kuo, L., and Mallick, B. 1998. Variable selection for regression models. Sankhya: The Indian Journal of Statistics, Series B, 60(1), 65–81. (Cited on page 132.)
Lauritzen, S. L., and Spiegelhalter, D. J. 1988. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, B, 50(2), 157–224. (Cited on page 174.)
Lin, D. 2013. Online learning of nonparametric mixture models via sequential variational approximation. Pages 395–403 of: Proceedings of the 26th International Conference on Neural Information Processing Systems. Curran Associates. (Cited on page 168.)
Lindgren, F., and Rue, H. 2015. Bayesian spatial modelling with R-INLA. Journal of Statistical Software, 63(19), 1–25. (Cited on page 214.)
Lindgren, F., Rue, H., and Lindstrom, J. 2011. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. Journal of the Royal Statistical Society, B, 73(4), 423–498. (Cited on page 214.)
Lindley, D. V. 1990. The 1988 Wald memorial lectures: the present position in Bayesian statistics. Statistical Science, 5, 44–89. (Cited on page 10.)
Liu, J., and West, M. 2001. Combined parameter and state estimation in simulation-based filtering. Pages 197–223 of: Doucet, A., de Freitas, N., and Gordon, N. (eds.), Sequential Monte Carlo Methods in Practice. Springer. (Cited on page 64.)
Lunn, D., Spiegelhalter, D., Thomas, A., and Best, N. 2009. The BUGS project: evolution, critique and future directions. Statistics in Medicine, 28(25), 3049–3067. (Cited on page 174.)
MacEachern, S., and Berliner, L. 1994. Subsampling the Gibbs sampler. The American Statistician, 48, 188–190.
Madigan, D., and York, J. 1995. Bayesian graphical models for discrete data. International Statistical Review, 63, 215–232. (Cited on page 133.)
Marin, J.-M., Pudlo, P., Robert, C. P., and Ryder, R. J. 2012. Approximate Bayesian computational methods. Statistics and Computing, 22(6), 1167–1180. (Cited on page 126.)
Mayo, D., and Kruse, M. 2001. Principles of inference and their consequences. Pages 381–403 of: Corfield, J., and Williamson, J. (eds.), Foundations of Bayesianism. Kluwer Academic Publishers. (Cited on page 9.)
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. 1953. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087–1092. (Cited on pages 90 and 97.)
Morris, J. S., Baggerly, K. A., and Coombes, K. R. 2003. Bayesian shrinkage estimation of the relative abundance of mRNA transcripts using SAGE. Biometrics, 59, 476–486. (Cited on page 122.)
Neal, R. M. 1997. Markov chain Monte Carlo methods based on "slicing" the density function. Technical Report. University of Toronto. (Cited on page 106.)
Neal, R. M. 2003. Slice sampling (with discussion). Annals of Statistics, 31, 705–767. (Cited on page 106.)
Neal, R. M. 2011. MCMC using Hamiltonian dynamics. Chap. 5 of: Brooks, S., Gelman, A., Jones, G., and Meng, X.-L. (eds.), Handbook of Markov Chain Monte Carlo. Chapman & Hall/CRC Press. (Cited on pages 107 and 185.)
Neuenschwander, B., Branson, M., and Gsponer, T. 2008. Critical aspects of the Bayesian approach to phase I cancer trials. Statistics in Medicine, 27, 2420–2439. (Cited on page 53.)
Newton, M. A., and Raftery, A. E. 1994. Approximate Bayesian inference by the weighted likelihood bootstrap (with discussion). Journal of the Royal Statistical Society, B, 56, 1–48. (Cited on page 86.)
Ntzoufras, I. 2009. Bayesian Modeling Using WinBUGS. Wiley. (Cited on page 172.)
O'Hagan, A. 2010. Bayesian Inference, Vol. 2B. 3rd edn. Arnold. (Cited on pages 1, 9, 14, and 17.)
O'Quigley, J., Pepe, M., and Fisher, L. 1990. Continual reassessment method: a practical design for phase 1 clinical trials in cancer. Biometrics, 46(1), 33–48. (Cited on page 45.)
Park, T., and Casella, G. 2008. The Bayesian lasso. Journal of the American Statistical Association, 103(482), 681–686. (Cited on page 134.)
Patil, V. H. 1964. The Behrens–Fisher problem and its Bayesian solution. Journal of the Indian Statistical Association, 2, 21. (Cited on page 33.)
Paulino, C. D., and Singer, J. M. 2006. Análise de Dados Categorizados. Editora Edgard Blücher.
Paulino, C. D., Soares, P., and Neuhaus, J. 2003. Binomial regression with misclassification. Biometrics, 59, 670–675. (Cited on page 17.)
Paulino, C. D., Amaral Turkman, M. A., Murteira, B., and Silva, G. 2018. Estatística Bayesiana. 2nd edn. Fundação Calouste Gulbenkian. (Cited on pages 17, 38, 54, 57, 84, 93, 158, and 199.)
Pitt, M. K., and Shephard, N. 1999. Filtering via simulation: auxiliary particle filters. Journal of the American Statistical Association, 94(446), 590–599. (Cited on pages 61, 62, and 63.)
Plummer, M. 2003. JAGS: a program for analysis of Bayesian graphical models using Gibbs sampling. In: Hornik, K., Leisch, F., and Zeileis, A. (eds.), 3rd International Workshop on Distributed Statistical Computing (DSC 2003). (Cited on page 172.)
Plummer, M. 2012. JAGS Version 3.3.0 User Manual. http://mcmc-jags.sourceforge.net, accessed July 22, 2018. (Cited on page 181.)
Plummer, M., Best, N. G., Cowles, M. K., and Vines, S. K. 2006. CODA: convergence diagnostics and output analysis for MCMC. R News, 6(1), 7–11. (Cited on pages 116, 198, and 201.)
Polson, N. G., Stroud, J. R., and Müller, P. 2008. Practical filtering with sequential parameter learning. Journal of the Royal Statistical Society, B, 70(2), 413–428. (Cited on page 64.)
Prado, R., and West, M. 2010. Time Series: Modeling, Computation, and Inference. Chapman & Hall/CRC Press. (Cited on page 64.)
Raftery, A. E., and Lewis, S. 1992. How many iterations in the Gibbs sampler? Pages 763–74 of: Bernardo, J. M., Berger, J. O., Dawid, A. P., and Smith, A. F. M. (eds.), Bayesian Statistics 4. Oxford University Press. (Cited on page 199.)
Raftery, A. E., Madigan, D., and Hoeting, J. A. 1997. Bayesian model averaging for linear regression models. Journal of the American Statistical Association, 92(437), 179–191. (Cited on page 133.)
Richardson, S., and Green, P. J. 1997. On Bayesian analysis of mixtures with an unknown number of components (with discussion). Journal of the Royal Statistical Society, B, 59(4), 731–792. (Cited on page 141.)
Rickert, J. 2018. A first look at NIMBLE. Blog post: https://rviews.rstudio.com/2018/07/05/a-first-look-at-nimble/, accessed July 16, 2018. (Cited on page 172.)
Ripley, B. D. 1987. Stochastic Simulation. Wiley. (Cited on pages 43 and 44.)
Robert, C. P. 1994. The Bayesian Choice. Springer. (Cited on pages 27 and 157.)
Robert, C. P., and Casella, G. 2004. Monte Carlo Statistical Methods. 2nd edn. Springer. (Cited on pages 44 and 96.)
Rosner, B. 1999. Fundamentals of Biostatistics. Duxbury. (Cited on page 222.)
Ross, S. M. 2014. Introduction to Probability Models. 11th edn. Academic Press. (Cited on page 91.)
Rossi, P. E., Allenby, G. M., and McCulloch, R. 2005. Bayesian Statistics and Marketing. Wiley. (Cited on page 172.)
Ročková, V., and George, E. I. 2014. EMVS: the EM approach to Bayesian variable selection. Journal of the American Statistical Association, 109(506), 828–846. (Cited on page 143.)
Rubinstein, R. Y. 1981. Simulation and the Monte Carlo Method. 1st edn. Wiley. (Cited on page 44.)
Rue, H., and Held, L. 2005. Gaussian Markov Random Fields: Theory and Applications. Chapman & Hall. (Cited on page 159.)
Rue, H., Martino, S., and Chopin, N. 2009. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. Journal of the Royal Statistical Society, B, 71(2), 319–392. (Cited on pages 150, 162, 163, 169, 214, and 217.)
Schofield, M. R., Barker, R. J., Gelman, A., Cook, E. R., and Briffa, K. 2016. A model-based approach to climate reconstruction using tree-ring data. Journal of the American Statistical Association, 2016, 93–106. (Cited on page 185.)
Schwarz, G. 1978. Estimating the dimension of a model. Annals of Statistics, 6, 461–466. (Cited on pages 77 and 83.)
Scott, S., Blocker, A., Bonassi, F., et al. 2016. Bayes and big data: the consensus Monte Carlo algorithm. International Journal of Management Science and Engineering Management, 11(2), 78–88. (Cited on page 68.)
Shaw, J. E. H. 1988. Aspects of numerical integration and summarization. Pages 625–631 of: Bernardo, J. M., DeGroot, M. H., Lindley, D. V., and Smith, A. F. M. (eds.), Bayesian Statistics 3. Oxford University Press. (Cited on page 52.)
Silverman, B. W. 1986. Density Estimation for Statistics and Data Analysis. Chapman & Hall.
Smith, A. F. M. 1991. Bayesian computational methods. Philosophical Transactions of the Royal Society of London, A, 337, 369–386.
Smith, A. F. M., and Gelfand, A. E. 1992. Bayesian statistics without tears. The American Statistician, 46, 84–88. (Cited on page 58.)
Smith, B. 2007. BOA: an R package for MCMC output convergence assessment and posterior inference. Journal of Statistical Software, 21, 1–37. (Cited on pages 116, 198, and 202.)
Spiegelhalter, D. J. 1986. Probabilistic prediction in patient management and clinical trials. Statistics in Medicine, 5(5), 421–433. (Cited on page 174.)
Spiegelhalter, D. J., Best, N. G., Carlin, B. P., and van der Linde, A. 2002. Bayesian measures of model complexity and fit (with discussion). Journal of the Royal Statistical Society, B, 64, 583–639. (Cited on pages 78 and 79.)
Stan Development Team. 2014. RStan: The R Interface to Stan, Version 2.5.0. (Cited on page 186.)
Sturtz, S., Ligges, U., and Gelman, A. 2005. R2WinBUGS: a package for running WinBUGS from R. Journal of Statistical Software, 12(3), 1–16. (Cited on page 175.)
Tanner, M. A. 1996. Tools for Statistical Inference. 3rd edn. Springer. (Cited on page 157.)
Tanner, M. A., and Wong, W. H. 1987. The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82(398), 528–540. (Cited on page 105.)
Thall, P. F., Millikan, R. E., Müller, P., and Lee, S.-J. 2003. Dose-finding with two agents in phase I oncology trials. Biometrics, 59(3), 487–496. (Cited on pages 126 and 127.)
Thomas, A., O'Hara, B., Ligges, U., and Sturtz, S. 2006. Making BUGS open. R News, 6(1), 12–17. (Cited on page 172.)
Tibshirani, R. 1996. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, B, 58(1), 267–288. (Cited on page 134.)
Tierney, L. 1994. Markov chains for exploring posterior distributions. Annals of Statistics, 22, 1701–1728. (Cited on page 96.)
Tierney, L. 1996. Introduction to general state-space Markov chain theory. Pages 61–74 of: Gilks, W., Richardson, S., and Spiegelhalter, D. (eds.), Markov Chain Monte Carlo in Practice. Chapman & Hall. (Cited on page 91.)
Tierney, L., and Kadane, J. 1986. Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81, 82–86. (Cited on pages 154 and 162.)
Tierney, L., Kass, R., and Kadane, J. 1989. Fully exponential Laplace approximations to expectations and variances of nonpositive functions. Journal of the American Statistical Association, 84(407), 710–716. (Cited on pages 156 and 157.)
Umlauf, N., Adler, D., Kneib, T., Lang, S., and Zeileis, A. 2015. Structured additive regression models: an R interface to BayesX. Journal of Statistical Software, 63(21), 1–46. (Cited on pages 193, 194, and 195.)
Vehtari, A., and Ojanen, J. 2012. A survey of Bayesian predictive methods for model assessment, selection and comparison. Statistics Surveys, 6, 142–228.
Vehtari, A., Gelman, A., and Gabry, J. 2017. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432. (Cited on pages 188 and 191.)
Walker, A. M. 1969. On the asymptotic behaviour of posterior distributions. Journal of the Royal Statistical Society, B, 31(1), 80–88. (Cited on page 151.)
Wasserman, L. 2004. All of Statistics. Springer-Verlag. (Cited on page 14.)
Watanabe, S. 2010. Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research, 11, 3571–3594. (Cited on page 80.)
Welling, M., and Teh, Y. W. 2011. Bayesian learning via stochastic gradient Langevin dynamics. Pages 681–688 of: Proceedings of the 28th International Conference on Machine Learning. Omnipress. (Cited on page 112.)
Zhang, Z., Chan, K. L., Wu, Y., and Chen, C. 2004. Learning a multivariate Gaussian mixture model with the reversible jump MCMC algorithm. Statistics and Computing, 14(4), 343–355. (Cited on page 146.)
