Whiteson S. Adaptive Representations for Reinforcement Learning
Series: Studies in Computational Intelligence (Book 291). — Springer, 2010. — 127 p.
ISBN: 978-3642139314, e-ISBN: 978-3642139321.
This book presents new algorithms for reinforcement learning, a form of machine learning in which an autonomous agent seeks a control policy for a sequential decision task. Since current methods typically rely on manually designed solution representations, agents that automatically adapt their own representations have the potential to dramatically improve performance. This book introduces two novel approaches for automatically discovering high-performing representations.

The first approach synthesizes temporal difference methods, the traditional approach to reinforcement learning, with evolutionary methods, which can learn representations for a broad class of optimization problems. This synthesis is accomplished by customizing evolutionary methods to the on-line nature of reinforcement learning and using them to evolve representations for value function approximators. The second approach automatically learns representations based on piecewise-constant approximations of value functions. It begins with coarse representations and gradually refines them during learning, analyzing the current policy and value function to deduce the best refinements.

This book also introduces a novel method for devising input representations. This method addresses the feature selection problem by extending an algorithm that evolves the topology and weights of neural networks so that it also evolves their inputs. In addition to introducing these new methods, this book presents extensive empirical results in multiple domains demonstrating that these techniques can substantially improve performance over methods with manual representations.
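To make the first approach concrete, below is a minimal, self-contained sketch of the general idea of on-line evolutionary function approximation: a population of candidate value-function representations is evolved while a temporal difference learner (here, tabular Q-learning) trains inside each candidate, and evaluation episodes are allocated epsilon-greedily across the population so that evaluation and exploitation happen on-line. The toy chain-walk environment, the state-aggregation genome (a bin count rather than an evolved network), and all function names are illustrative assumptions, not code from the book.

```python
# Hypothetical sketch: evolving the representation (state-aggregation width) of a
# tabular Q-learning agent on a toy chain MDP, with epsilon-greedy allocation of
# evaluation episodes across the population (on-line evolutionary computation).
# The environment, genome encoding, and all names are illustrative, not the book's code.

import random

CHAIN_LEN = 50          # states 0..49; reaching the right end yields reward
ACTIONS = (-1, +1)      # step left or step right
EPISODE_CAP = 200       # max steps per episode


def run_episode(num_bins, q_table, alpha=0.1, gamma=0.99, eps=0.1):
    """One Q-learning episode; the representation aggregates states into num_bins bins."""
    def bin_of(s):
        return min(num_bins - 1, s * num_bins // CHAIN_LEN)

    s, total = 0, 0.0
    for _ in range(EPISODE_CAP):
        b = bin_of(s)
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q_table[b][i])
        s2 = min(max(s + ACTIONS[a], 0), CHAIN_LEN - 1)
        r = 1.0 if s2 == CHAIN_LEN - 1 else -0.01
        total += r
        b2 = bin_of(s2)
        target = r + gamma * max(q_table[b2])
        q_table[b][a] += alpha * (target - q_table[b][a])
        if s2 == CHAIN_LEN - 1:
            break
        s = s2
    return total


def evolve(pop_size=6, generations=5, episodes_per_gen=60, select_eps=0.25):
    # A genome is simply the number of bins used by the value-function representation.
    population = [random.randint(2, CHAIN_LEN) for _ in range(pop_size)]
    for gen in range(generations):
        tables = [[[0.0, 0.0] for _ in range(g)] for g in population]
        fitness = [0.0] * pop_size
        trials = [0] * pop_size
        for _ in range(episodes_per_gen):
            # Epsilon-greedy evolution: usually evaluate the genome that currently
            # looks best, occasionally a random one, so evaluation episodes double
            # as exploitation of the best representation found so far.
            if random.random() < select_eps or max(trials) == 0:
                i = random.randrange(pop_size)
            else:
                i = max(range(pop_size), key=lambda j: fitness[j] / max(trials[j], 1))
            fitness[i] += run_episode(population[i], tables[i])
            trials[i] += 1
        scores = [fitness[i] / max(trials[i], 1) for i in range(pop_size)]
        ranked = sorted(range(pop_size), key=lambda i: scores[i], reverse=True)
        print(f"gen {gen}: best bins={population[ranked[0]]}, avg return={scores[ranked[0]]:.2f}")
        # Keep the top half and refill with mutated copies (perturbed bin counts).
        survivors = [population[i] for i in ranked[: pop_size // 2]]
        children = [max(2, s + random.choice((-2, -1, 1, 2))) for s in survivors]
        population = survivors + children


if __name__ == "__main__":
    random.seed(0)
    evolve()
```

In the book's own methods the genome is richer than a bin count: NEAT+Q evolves the topology and weights of neural network value-function approximators, and fitness is likewise the reward the TD learner accumulates during on-line episodes.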
Contents:
Motivation
Approach
Overview
Reinforcement Learning
Reinforcement Learning Framework
Temporal Difference Methods
Policy Search Methods
On-Line Evolutionary Computation
Epsilon-Greedy Evolution
Softmax Evolution
Interval Estimation Evolution
Testbed Domains
Evolutionary Function Approximation
NEAT+Q
Results:
Comparing Manual and Evolutionary Function Approximation
Combining On-Line Evolution with Evolutionary Function Approximation
Comparing to Other Approaches
Comparing Darwinian and Lamarckian Approaches
Continual Learning Tests
Sample-Efficient Evolutionary Function Approximation
Sample-Efficient NEAT+Q
Automatic Feature Selection for Reinforcement Learning
FS-NEAT
Testbed Domain
Adaptive Tile Coding
Method: When to Split, Where to Split
Testbed Domains
Related Work
Optimizing Representations
Combining Evolution and Learning
Balancing Exploration and Exploitation
Feature Selection
Primary Conclusions
Negative Results
Broader Implications
Future Work
Final Remarks
Appendix A: Statistical Significance