Introduction
Markov Decision Processes (MDP’s) and the Theory of Dynamic Programming
Definitions of MDP’s, DDP’s, and CDP’s
Bellman’s Equation, Contraction Mappings, and Blackwell’s Theorem
Error Bounds for Approximate Fixed Points of Approximate Bellman Operators
A Geometric Series Representation for MDP’s
Examples of Analytic Solutions to Bellman’s Equation for Specific Test Problems
Euler Equations and Euler Operators
Computational Complexity and Optimal Algorithms
Discrete Computational Complexity
Continuous Computational Complexity
Computational Complexity of the Approximation Problem
Numerical Methods for Contraction Fixed Points
Numerical Methods for MDP’s
Discrete Finite Horizon MDP’s
Discrete Infinite Horizon MDP’s
Continuous Finite Horizon MDP’s
Continuous Infinite Horizon MDP’s