Tutorial 3

Belief Revision: From AGM to Computational Models

Abstract:

Belief revision is the process of transforming a belief state on receipt of new information and plays an important role in understanding and designing automated agents. In the AGM paradigm (due to Alchourrón, Gärdenfors, and Makinson), the belief state of an agent is represented by a belief set, a set of formulas closed under logical consequence. Belief sets suffer from problems of representational and computational intractability.

To alleviate these problems, some authors have proposed the use of finite sets (called belief bases) to represent belief states. This alternative has been extensively studied and AGM-like operations have been defined for belief bases. Although the use of belief bases solves the problem of representing a belief state, belief bases are typically quite large and the belief change operations make use of computationally expensive consistency checks.
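To make the role of consistency checks concrete, here is a minimal sketch of belief-base revision, deliberately restricted to propositional literals so that the consistency check is trivial; the representation and function names are illustrative assumptions, not the AGM constructions themselves.

```python
# Toy belief-base revision over propositional literals such as "p" and "~p".
# For literals, the consistency check is trivial: a base is inconsistent
# iff it contains both p and ~p. For full propositional bases this check
# is NP-hard, which is the computational cost the abstract refers to.

def negate(lit):
    """Return the complementary literal: p <-> ~p."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def revise(base, new_lit):
    """Revise a finite belief base by a literal, keeping it consistent:
    retract any belief that directly conflicts with the input, then add it."""
    consistent = {b for b in base if b != negate(new_lit)}
    return consistent | {new_lit}

base = {"p", "q", "~r"}
print(sorted(revise(base, "~p")))  # "p" is retracted to make room for "~p"
```

Even in this toy setting, the revision operator has to decide which old beliefs to give up, which is exactly where the selection mechanisms discussed below enter.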

Conventional AGM-style revision provides an elegant and powerful framework for reasoning about how a rational agent should change its beliefs when confronted with new information, but it tells us very little about how such an agent could actually perform those belief changes. Several important issues were not satisfactorily dealt with. Of these, two have received a lot of attention in the last few years: iterated revision and selection/preference mechanisms. These issues are intimately related, since the main problem for iterated revision is that AGM-style operations do not provide a selection mechanism for the revised belief state.

Another problem of traditional belief revision is that the rational agent described is a highly idealized one, a perfect reasoner with unbounded memory, logical ability, no inconsistent beliefs and no time constraints. More realistic models should be inconsistency tolerant and should try to reduce the size of the set to be explored. Intuitively, not all of an agent's beliefs are relevant for deciding what to do with new information. There should be a way of isolating the subset of a belief base that contains the relevant beliefs for a query or an operation of belief change.
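One simple way to isolate the relevant part of a belief base is syntactic: retrieve only those formulas that share atoms with the query, transitively. The sketch below illustrates this idea; representing formulas by their sets of atoms is an illustrative assumption, and the function name is hypothetical.

```python
# Syntactic relevance retrieval: starting from the query's atoms,
# repeatedly collect formulas that share an atom with what has been
# reached so far. Only this connected fragment of the belief base
# needs to be explored by a query or a belief-change operation.

def relevant_subset(base, query_atoms):
    """base: collection of frozensets of atoms (one per formula).
    Returns the formulas connected to the query via shared atoms."""
    reached = set(query_atoms)
    selected = set()
    changed = True
    while changed:
        changed = False
        for formula in base:
            if formula not in selected and formula & reached:
                selected.add(formula)
                reached |= formula  # atoms of the formula become reachable
                changed = True
    return selected
```

For example, with base {p&q, q->r, s} and query p, the first two formulas are retrieved (linked through q) while s is ignored, so the revision operation works on a smaller, query-relevant base.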

Recent models attempt to tackle the problem of plausible belief revision by using nonstandard inference operations and structuring belief bases according to some notion of relevance. These ideas can also be combined with the use of approximate inference relations, which offer us partial solutions at any stage of the revision process. The quality of the approximations improves as we allow for more and more resources to be used.
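The anytime flavor of approximate inference can be illustrated with a resource-bounded derivation: the more steps the reasoner is allowed, the more consequences it finds, and the answer only improves monotonically. The Horn-rule setting and all names below are illustrative assumptions, not a specific system from the literature.

```python
# Anytime approximate inference as budgeted forward chaining over Horn
# rules. Each round applies every rule whose premises are already known;
# stopping early yields a sound but possibly incomplete (partial) answer,
# and the derived set grows as the budget grows.

def derive(facts, rules, budget):
    """rules: list of (premises_set, conclusion) pairs.
    Apply up to `budget` rounds of forward chaining and return
    the facts derived so far."""
    known = set(facts)
    for _ in range(budget):
        new = {concl for prem, concl in rules
               if prem <= known and concl not in known}
        if not new:       # fixpoint reached: the answer is now exact
            break
        known |= new
    return known
```

With facts {a} and rules a->b, b->c, a budget of 1 yields the partial answer {a, b}, while a budget of 2 reaches the full closure {a, b, c}: a crude analogue of approximations whose quality improves with resources.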

In this tutorial, we first introduce the classical paradigm for belief revision, using both belief sets and belief bases as representations of an agent's belief state. We show the shortcomings of the classical approach and present some alternatives to, or refinements of, AGM theory which do not suffer from (all of) these shortcomings.