MIT Press, 1990. 176 pp.
The Computational Complexity of Machine Learning is a mathematical study of the possibilities for efficient learning by computers. It works within recently introduced models for machine inference that are based on the theory of computational complexity and that place an explicit emphasis on efficient and general algorithms for learning.
Theorems are presented that help elucidate the boundary of what is efficiently learnable from examples. These results take the form of both algorithms with proofs of their performance and hardness results demonstrating the intractability of learning in certain natural settings. In addition, the book contains lower bounds on the resources required for learning, an extensive study of learning in the presence of errors in the sample data, and several theorems demonstrating reducibilities between learning problems.
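For orientation, the underlying framework is Valiant's distribution-free ("probably approximately correct") model of learning. A standard formulation, paraphrased rather than quoted from the book, is the following: a concept class $C$ is efficiently learnable if there exists an algorithm $A$ such that for every target concept $c \in C$, every probability distribution $D$ over the instance space, and every $\epsilon, \delta \in (0,1)$, when $A$ is given random labeled examples $\langle x, c(x) \rangle$ with $x$ drawn independently from $D$, it halts in time polynomial in $1/\epsilon$, $1/\delta$, and the size of $c$, and with probability at least $1 - \delta$ outputs a hypothesis $h$ satisfying

$$\Pr_{x \sim D}\left[ h(x) \neq c(x) \right] \leq \epsilon.$$

The requirement that a single algorithm succeed for every distribution $D$ is what makes the model distribution-free, and it is this requirement that the book's hardness and lower-bound results exploit.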
Definitions and Motivation for Distribution-free Learning
Recent Research in Computational Learning Theory
Tools for Distribution-free Learning
Learning in the Presence of Errors
Lower Bounds on Sample Complexity
Cryptographic Limitations on Polynomial-time Learning
Distribution-specific Learning in Polynomial Time
Equivalence of Weak Learning and Group Learning
Conclusions and Open Problems