Murphy K.P. Machine Learning: A Probabilistic Perspective
MIT Press, 2012. 1098 p.
With the ever-increasing amounts of data in electronic form, the need for automated methods of data analysis continues to grow. The goal of machine learning is to develop methods that can automatically detect patterns in data, and then to use the uncovered patterns to predict future data or other outcomes of interest. Machine learning is thus closely related to the fields of statistics and data mining, but differs slightly in terms of its emphasis and terminology. This book provides a detailed introduction to the field, and includes worked examples drawn from application domains such as molecular biology, text processing, computer vision, and robotics.
This book is suitable for upper-level undergraduate students and beginning graduate students in computer science, statistics, electrical engineering, or econometrics, as well as anyone else with the appropriate mathematical background. Specifically, the reader is assumed to already be familiar with basic multivariate calculus, probability, linear algebra, and computer programming. Prior exposure to statistics is helpful but not necessary.
This book adopts the view that the best way to make machines that can learn from data is to use the tools of probability theory, which has been the mainstay of statistics and engineering for centuries. Probability theory can be applied to any problem involving uncertainty. In machine learning, uncertainty comes in many forms: What is the best prediction (or decision) given some data? What is the best model given some data? What measurement should I perform next? And so on.
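As a small illustration of the first of these questions, the Python sketch below (not code from the book; the data and prior are made up) answers "what is the best prediction given some data?" for a Beta-Bernoulli coin model by computing the posterior predictive probability of the next outcome.

    import numpy as np

    # Beta-Bernoulli model: theta ~ Beta(a, b) prior,
    # each observation x_i ~ Bernoulli(theta).
    data = np.array([1, 0, 1, 1, 0, 1, 1, 1])  # hypothetical coin flips
    a, b = 1.0, 1.0                            # uniform Beta(1, 1) prior

    # By conjugacy, the posterior is Beta(a + #heads, b + #tails).
    heads = int(data.sum())
    tails = len(data) - heads
    post_a, post_b = a + heads, b + tails

    # Posterior predictive: P(next = 1 | data) = post_a / (post_a + post_b).
    print(post_a / (post_a + post_b))          # 0.7 for this data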
The systematic application of probabilistic reasoning to all inferential problems, including inferring parameters of statistical models, is sometimes called a Bayesian approach. However, this term tends to elicit very strong reactions (either positive or negative, depending on who you ask), so we prefer the more neutral term probabilistic approach. Besides, we will often use techniques such as maximum likelihood estimation, which are not Bayesian methods, but certainly fall within the probabilistic paradigm.
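The sketch below (again with hypothetical data, not the book's code) contrasts the two kinds of estimate for the same Bernoulli model: both are probabilistic, but only the posterior mean is Bayesian.

    import numpy as np

    # Two probabilistic estimators for the Bernoulli rate theta.
    data = np.array([1, 1, 1])           # three heads, zero tails
    n, heads = len(data), int(data.sum())

    # Maximum likelihood: theta_hat = heads / n. Probabilistic, not Bayesian.
    theta_mle = heads / n                # = 1.0; overconfident on small samples

    # Bayesian posterior mean under a Beta(1, 1) prior: (heads + 1) / (n + 2).
    theta_bayes = (heads + 1) / (n + 2)  # = 0.8; shrunk toward the prior

    print(theta_mle, theta_bayes)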
Rather than describing a cookbook of different heuristic methods, this book stresses a principled model-based approach to machine learning. For any given model, a variety of algorithms can often be applied. Conversely, any given algorithm can often be applied to a variety of models. This kind of modularity, where we distinguish model from algorithm, is good pedagogy and good engineering.
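As a concrete sketch of this modularity (synthetic data and settings, chosen only for illustration), the code below fits one model, least-squares linear regression, with two different algorithms: closed-form normal equations and batch gradient descent.

    import numpy as np

    # One model (least-squares linear regression), two algorithms.
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(50), rng.normal(size=50)])  # bias + feature
    w_true = np.array([2.0, -3.0])
    y = X @ w_true + 0.1 * rng.normal(size=50)

    # Algorithm 1: closed-form normal equations, w = (X^T X)^{-1} X^T y.
    w_exact = np.linalg.solve(X.T @ X, X.T @ y)

    # Algorithm 2: batch gradient descent on the same squared-error objective.
    w_gd = np.zeros(2)
    for _ in range(2000):
        grad = X.T @ (X @ w_gd - y) / len(y)  # gradient of 0.5 * MSE
        w_gd -= 0.1 * grad

    print(w_exact, w_gd)   # both converge to roughly [2, -3]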
We will often use the language of graphical models to specify our models in a concise and intuitive way. In addition to aiding comprehension, the graph structure aids in developing efficient algorithms, as we will see. However, this book is not primarily about graphical models; it is about probabilistic modeling in general.
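To make the idea concrete, the sketch below writes down a small directed graphical model, the classic "sprinkler" network, and uses its factorized joint to compute a marginal by brute-force enumeration. The network and its probability tables are standard textbook values chosen here for illustration, not necessarily the book's own example.

    import itertools

    # Directed graphical model: cloudy -> sprinkler, cloudy -> rain,
    # sprinkler and rain -> wet grass. The graph specifies the factorization
    #   p(C, S, R, W) = p(C) p(S | C) p(R | C) p(W | S, R)
    p_C = {1: 0.5, 0: 0.5}
    p_S = {1: {1: 0.1, 0: 0.9}, 0: {1: 0.5, 0: 0.5}}   # p(S | C)
    p_R = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.2, 0: 0.8}}   # p(R | C)
    p_W = {(1, 1): {1: 0.99, 0: 0.01}, (1, 0): {1: 0.9, 0: 0.1},
           (0, 1): {1: 0.9, 0: 0.1}, (0, 0): {1: 0.0, 0: 1.0}}  # p(W | S, R)

    def joint(c, s, r, w):
        return p_C[c] * p_S[c][s] * p_R[c][r] * p_W[(s, r)][w]

    # Marginal p(W = 1) by enumerating the other variables.
    p_wet = sum(joint(c, s, r, 1)
                for c, s, r in itertools.product([0, 1], repeat=3))
    print(round(p_wet, 4))   # 0.6246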
Contents:
Probability
Generative models for discrete data
Gaussian models
Bayesian statistics
Frequentist statistics
Linear regression
Logistic regression
Generalized linear models and the exponential family
Directed graphical models (Bayes nets)
Mixture models and the EM algorithm
Latent linear models
Sparse linear models
Kernels
Gaussian processes
Adaptive basis function models
Markov and hidden Markov models
State space models
Undirected graphical models (Markov random fields)
Exact inference for graphical models
Variational inference
More variational inference
Monte Carlo inference
Markov chain Monte Carlo (MCMC) inference
Clustering
Graphical model structure learning
Latent variable models for discrete data
Deep learning