Goodfellow Ian, Bengio Yoshua, Courville Aaron. Deep Learning Book
MIT Press, 2016. — 802 p. — ISBN: 978-0-262-33737-3.
A comprehensive introduction to neural networks and deep learning by leading researchers in the field. The book is written for two main audiences: university students (undergraduate or graduate) learning about machine learning, and software engineers.
This is a PDF compilation of the online book (www.deeplearningbook.org).
Introduction
Who Should Read This Book?
Historical Trends in Deep Learning.
Applied Math and Machine Learning Basics
Linear Algebra
Scalars, Vectors, Matrices, and Tensors.
Multiplying Matrices and Vectors.
Identity and Inverse Matrices.
Linear Dependence and Span.
Norms.
Special Kinds of Matrices and Vectors.
Eigendecomposition.
Singular Value Decomposition.
The Moore-Penrose Pseudoinverse.
The Trace Operator.
The Determinant.
Example: Principal Components Analysis.
Probability and Information Theory
Why Probability?
Random Variables.
Probability Distributions.
Marginal Probability.
Conditional Probability.
The Chain Rule of Conditional Probabilities.
Independence and Conditional Independence.
Expectation, Variance, and Covariance.
Common Probability Distributions.
Useful Properties of Common Functions.
Bayes’ Rule.
Technical Details of Continuous Variables.
Information Theory.
Structured Probabilistic Models.
Numerical Computation
Overflow and Underflow.
Poor Conditioning.
Gradient-Based Optimization.
Constrained Optimization.
Example: Linear Least Squares.
Machine Learning Basics
Learning Algorithms.
Capacity, Overfitting, and Underfitting.
Hyperparameters and Validation Sets.
Estimators, Bias and Variance.
Maximum Likelihood Estimation.
Bayesian Statistics.
Supervised Learning Algorithms.
Unsupervised Learning Algorithms.
Stochastic Gradient Descent.
Building a Machine Learning Algorithm.
Challenges Motivating Deep Learning.
Deep Networks: Modern Practices
Deep Feedforward Networks
Example: Learning XOR.
Gradient-Based Learning.
Hidden Units.
Architecture Design.
Back-Propagation and Other Differentiation Algorithms.
Historical Notes.
Regularization for Deep Learning
Parameter Norm Penalties.
Norm Penalties as Constrained Optimization.
Regularization and Under-Constrained Problems.
Dataset Augmentation.
Noise Robustness.
Semi-Supervised Learning.
Multi-Task Learning.
Early Stopping.
Parameter Tying and Parameter Sharing.
Sparse Representations.
Bagging and Other Ensemble Methods.
Dropout.
Adversarial Training.
Tangent Distance, Tangent Prop, and Manifold Tangent Classifier.
Optimization for Training Deep Models
How Learning Differs from Pure Optimization.
Challenges in Neural Network Optimization.
Basic Algorithms.
Parameter Initialization Strategies.
Algorithms with Adaptive Learning Rates.
Approximate Second-Order Methods.
Optimization Strategies and Meta-Algorithms.
Convolutional Networks
The Convolution Operation.
Motivation.
Pooling.
Convolution and Pooling as an Infinitely Strong Prior.
Variants of the Basic Convolution Function.
Structured Outputs.
Data Types.
Efficient Convolution Algorithms.
Random or Unsupervised Features.
The Neuroscientific Basis for Convolutional Networks.
Convolutional Networks and the History of Deep Learning.
Sequence Modeling: Recurrent and Recursive Nets
Unfolding Computational Graphs.
Recurrent Neural Networks.
Bidirectional RNNs.
Encoder-Decoder Sequence-to-Sequence Architectures.
Deep Recurrent Networks.
Recursive Neural Networks.
The Challenge of Long-Term Dependencies.
Echo State Networks.
Leaky Units and Other Strategies for Multiple Time Scales.
The Long Short-Term Memory and Other Gated RNNs.
Optimization for Long-Term Dependencies.
Explicit Memory.
Practical Methodology
Performance Metrics.
Default Baseline Models.
Determining Whether to Gather More Data.
Selecting Hyperparameters.
Debugging Strategies.
Example: Multi-Digit Number Recognition.
Applications
Large Scale Deep Learning.
Computer Vision.
Speech Recognition.
Natural Language Processing.
Other Applications.
Deep Learning Research
Linear Factor Models
Probabilistic PCA and Factor Analysis.
Independent Component Analysis (ICA).
Slow Feature Analysis.
Sparse Coding.
Manifold Interpretation of PCA.
Autoencoders
Undercomplete Autoencoders.
Regularized Autoencoders.
Representational Power, Layer Size, and Depth.
Stochastic Encoders and Decoders.
Denoising Autoencoders.
Learning Manifolds with Autoencoders.
Contractive Autoencoders.
Predictive Sparse Decomposition.
Applications of Autoencoders.
Representation Learning
Greedy Layer-Wise Unsupervised Pretraining.
Transfer Learning and Domain Adaptation.
Semi-Supervised Disentangling of Causal Factors.
Distributed Representation.
Exponential Gains from Depth.
Providing Clues to Discover Underlying Causes.
Structured Probabilistic Models for Deep Learning
The Challenge of Unstructured Modeling.
Using Graphs to Describe Model Structure.
Sampling from Graphical Models.
Advantages of Structured Modeling.
Learning about Dependencies.
Inference and Approximate Inference.
The Deep Learning Approach to Structured Probabilistic Models.
Monte Carlo Methods
Sampling and Monte Carlo Methods.
Importance Sampling.
Markov Chain Monte Carlo Methods.
Gibbs Sampling.
The Challenge of Mixing between Separated Modes.
Confronting the Partition Function
The Log-Likelihood Gradient.
Stochastic Maximum Likelihood and Contrastive Divergence.
Pseudolikelihood.
Score Matching and Ratio Matching.
Denoising Score Matching.
Noise-Contrastive Estimation.
Estimating the Partition Function.
Approximate Inference
Inference as Optimization.
Expectation Maximization.
MAP Inference and Sparse Coding.
Variational Inference and Learning.
Learned Approximate Inference.
Deep Generative Models
Boltzmann Machines.
Restricted Boltzmann Machines.
Deep Belief Networks.
Deep Boltzmann Machines.
Boltzmann Machines for Real-Valued Data.
Convolutional Boltzmann Machines.
Boltzmann Machines for Structured or Sequential Outputs.
Other Boltzmann Machines.
Back-Propagation through Random Operations.
Directed Generative Nets.
Drawing Samples from Autoencoders.
Generative Stochastic Networks.
Other Generation Schemes.
Evaluating Generative Models.