N.Y.: Springer Science+Business Media, 1996. — 183 p. — (Lecture Notes in Statistics, Vol. 118). — ISBN: 978-0-387-94724-2.
This book explores the Bayesian approach to learning flexible statistical models based on what are known as "neural networks". These models are now commonly used for many applications, but understanding why they (sometimes) work well and how they can best be employed is still a matter for research. My aim in the work reported here is twofold: to show that a Bayesian approach to learning these models can yield theoretical insights, and to show also that it can be useful in practice. The strategy for dealing with complexity that I advocate here for neural network models can also be applied to other complex Bayesian models, as can the computational methods that I employ.
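To give a flavor of what "Bayesian learning of a neural network by Monte Carlo methods" means in practice, here is a minimal sketch. It is not the book's method: the book develops a hybrid (Hamiltonian) Monte Carlo implementation with hierarchical priors on hyperparameters, whereas this sketch uses a much simpler random-walk Metropolis sampler on the weights of a tiny one-hidden-layer network, with a fixed Gaussian prior and fixed noise level. All data, sizes, and names here are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a noisy sine curve.
x = np.linspace(-1, 1, 20)[:, None]
y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.shape)

H = 4  # hidden units (illustrative choice)

def unpack(w):
    # Split the flat parameter vector into layer weights and biases.
    w1 = w[:H].reshape(1, H)          # input-to-hidden weights
    b1 = w[H:2 * H]                   # hidden biases
    w2 = w[2 * H:3 * H].reshape(H, 1) # hidden-to-output weights
    b2 = w[3 * H]                     # output bias
    return w1, b1, w2, b2

def predict(w, x):
    w1, b1, w2, b2 = unpack(w)
    return np.tanh(x @ w1 + b1) @ w2 + b2

def log_post(w, sigma=0.1, tau=1.0):
    # Log posterior up to a constant: Gaussian likelihood (noise sd sigma)
    # plus an independent Gaussian prior (sd tau) on every parameter.
    resid = y - predict(w, x)
    return (-0.5 * np.sum(resid ** 2) / sigma ** 2
            - 0.5 * np.sum(w ** 2) / tau ** 2)

n_par = 3 * H + 1
w = 0.1 * rng.standard_normal(n_par)
lp = log_post(w)
samples = []
for i in range(5000):
    prop = w + 0.05 * rng.standard_normal(n_par)  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        w, lp = prop, lp_prop
    if i >= 2500 and i % 10 == 0:                 # keep thinned post-burn-in draws
        samples.append(w.copy())

# Prediction averages over sampled networks rather than using one "best" fit.
pred = np.mean([predict(s, x) for s in samples], axis=0)
```

Averaging predictions over posterior samples, rather than optimizing a single weight vector, is the key point this sketch is meant to illustrate; the book's hybrid Monte Carlo scheme exists because random-walk exploration like this scales poorly to realistically sized networks.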
Priors for Infinite Networks.
Monte Carlo Implementation.
Evaluation of Neural Network Models.
Conclusions and Further Work.
A: Details of the Implementation.
B: Obtaining the Software.