Springer, 2011. 72 p.
Autonomy is arguably the most important feature of an intelligent agent, since it dictates that the agent can make decisions on its own, without any outside help. In simple environments this is not difficult to achieve: a straightforward search through the possible actions and states will yield the best action in every case, and the associated computation remains tractable. The situation changes drastically in complex environments, however. Agents in such environments must often act under uncertainty: they cannot always be sure what state they are in, nor what the outcomes of their actions will be.
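As a minimal sketch (not taken from the book) of the kind of exhaustive decision making that suffices in a simple, fully observable, deterministic environment: enumerate every action, simulate its outcome, and pick the one with the highest reward. The toy environment, actions, and reward below are purely illustrative assumptions.

    # Hypothetical toy example: deterministic 1-D world, agent wants to reach position 3.
    def best_action(state, actions, transition, reward):
        """Return the action whose (deterministic) outcome has the highest reward."""
        return max(actions, key=lambda a: reward(transition(state, a)))

    actions = [-1, 0, +1]
    transition = lambda s, a: s + a       # outcomes are fully predictable
    reward = lambda s: -abs(s - 3)        # closer to the goal is better

    print(best_action(0, actions, transition, reward))  # -> 1, i.e. step toward the goal

In larger or uncertain environments, this brute-force enumeration is exactly what breaks down, which motivates the models compared in the book.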
The problem of establishing the best mechanism by which an agent can make decisions has been widely studied, and several approaches have been formulated to tackle it in a wide variety of ways. In this work, we will focus on two models that have been proposed as ways to attack it. The first is the Belief-Desire-Intention (BDI) model, which will be discussed in Section 2.1; this model falls into the class of descriptive approaches, that is, approaches based on analyzing the way that people or animals make decisions. The second model, the Markov Decision Process (MDP), belongs to the class of prescriptive approaches, which attempt to identify the optimal decision and are typically grounded in Decision Theory (Raiffa, 1968). The necessary background for this model will be covered in Section 2.2.
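To make the prescriptive, decision-theoretic flavor of the MDP model concrete, here is a minimal illustrative sketch (not the book's implementation) of value iteration on a made-up two-state problem; all state names, transition probabilities, and rewards are assumptions chosen for the example.

    # T[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
    states = ["ok", "broken"]
    actions = ["work", "repair"]
    T = {
        "ok":     {"work":   [("ok", 0.9), ("broken", 0.1)],
                   "repair": [("ok", 1.0)]},
        "broken": {"work":   [("broken", 1.0)],
                   "repair": [("ok", 0.6), ("broken", 0.4)]},
    }
    R = {"ok": {"work": 10.0, "repair": 0.0},
         "broken": {"work": 0.0, "repair": -5.0}}
    gamma = 0.9  # discount factor

    # Value iteration: V(s) <- max_a [ R(s,a) + gamma * sum_s' T(s,a,s') * V(s') ]
    V = {s: 0.0 for s in states}
    for _ in range(100):
        V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a])
                    for a in actions)
             for s in states}

    policy = {s: max(actions,
                     key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a]))
              for s in states}
    print(V, policy)  # expected: "work" when ok, "repair" when broken

The optimal policy here is computed from the model of transitions and rewards rather than from observing how people decide, which is what distinguishes the prescriptive MDP approach from the descriptive BDI one.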
Introduction
Preliminary Concepts
An Empirical Comparison of Models
Evaluation
Related Work
Conclusions, Limitations, and Future Directions