
Sennott L.I. Stochastic Dynamic Programming and the Control of Queueing Systems

John Wiley, 1999. — 349 p.
The subject of stochastic dynamic programming, also known as stochastic optimal control, Markov decision processes, or Markov decision chains, encompasses a wide variety of interest areas and is an important part of the curriculum in operations research, management science, engineering, and applied mathematics departments.
This book is unique in its total integration of theory and computation, and these two strands are interleaved throughout. First the theory underlying a particular optimization criterion (goal for system operation) is developed, and it is proved that optimal policies (rules for system operation that achieve an optimization criterion) exist. Then a computational method is given so that these policies may be numerically determined.
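As an illustration of the book's "theory, then computation" pattern, here is a minimal sketch of value iteration for an infinite-horizon discounted-cost Markov decision process (the criterion of the book's discounted-cost chapter). The two-state admission-control model, its costs, and its transition probabilities below are purely illustrative assumptions, not taken from the text.

```python
def value_iteration(states, actions, cost, trans, beta=0.9, tol=1e-8):
    """Iterate the discounted dynamic programming operator to (near)
    convergence, then return the value function and a greedy policy."""
    v = {s: 0.0 for s in states}
    while True:
        # One application of the optimality operator: for each state,
        # minimize immediate cost plus discounted expected future cost.
        v_new = {
            s: min(
                cost[s][a] + beta * sum(p * v[t] for t, p in trans[s][a].items())
                for a in actions
            )
            for s in states
        }
        if max(abs(v_new[s] - v[s]) for s in states) < tol:
            break
        v = v_new
    # A stationary policy that is greedy with respect to the converged values.
    policy = {
        s: min(
            actions,
            key=lambda a: cost[s][a]
            + beta * sum(p * v[t] for t, p in trans[s][a].items()),
        )
        for s in states
    }
    return v, policy

# Illustrative two-state queueing toy: state 0 = queue empty, 1 = occupied.
states = [0, 1]
actions = ["admit", "reject"]
cost = {0: {"admit": 0.0, "reject": 1.0},
        1: {"admit": 2.0, "reject": 0.5}}
trans = {0: {"admit": {1: 1.0}, "reject": {0: 1.0}},
         1: {"admit": {1: 1.0}, "reject": {0: 1.0}}}
v, policy = value_iteration(states, actions, cost, trans)
```

For this toy model the greedy policy admits when the queue is empty and rejects when it is occupied; the same iteration scheme underlies the numerical methods the book develops for far richer queueing models.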
Stochastic dynamic programming encompasses many application areas. We have chosen to illustrate the theory and computation with examples mostly drawn from the control of queueing systems. Inventory models and a machine replacement model are also treated. An advantage of focusing the examples largely in one area is that it enables us to develop these important applications in depth and, at the same time, to expand the subject of control of queueing systems. However, the theory presented here is general and has applications in diverse subject areas.
The important background material is given in the appendixes. The appendixes are intended to be used as references, to be dipped into as needed. Some of the appendix material includes proofs. These are for the convenience of the interested reader and are not requisite to understanding the text.
The mathematical background necessary for comprehension of the text would be encompassed by a semester course on basic probability and stochastic processes, especially on the theory of Markov chains. However, since all the necessary background results are reviewed in the appendixes, the number of specific results the reader is expected to bring to the table is minimal. Perhaps most important for the reader is a bit of that famous ever-vague "mathematical maturity," which is always helpful in understanding certain logical ideas that recur in many of the arguments. The prospective student of this text should keep in mind that understanding the basic arguments in stochastic dynamic programming is a skill that is developed and refined with practice. It definitely gets easier as one progresses!
Optimization Criteria
Finite Horizon Optimization
Infinite Horizon Discounted Cost Optimization
An Inventory Model
Average Cost Optimization for Finite State Spaces
Average Cost Optimization Theory for Countable State Spaces
Computation of Average Cost Optimal Policies for Infinite State Spaces
Optimization under Actions at Selected Epochs
Average Cost Optimization of Continuous Time Processes
A: Results from Analysis
B: Sequences of Stationary Policies
C: Markov Chains