Introduction to promise theory
Promise theory is a model of voluntary cooperation between individual,
autonomous actors or agents who publish their intentions to one
another in the form of promises. A promise is a declaration of intent
whose purpose is to increase the recipient's certainty about a claim
of past, present or future behaviour (see BergstraBurgess08).
For a promise to increase certainty, the recipient needs to trust the
promiser, but trust can also be built on the verification that
previous promises have been kept; trust and promises thus have a
symbiotic relationship (see BergstraBurgess06).
Promise Theory was proposed by Mark Burgess in 2004 in order to solve
seemingly insurmountable problems in obligation-based computer
management schemes for Policy Based Management (see Burgess05).
However, its usefulness was quickly seen to go far beyond computing.
Indeed the simple model of a promise used in Promise Theory can easily
address matters of Economics (see BurgessFagernes04)
and Organization (see BurgessFagernes07).
Promise Theory is not directly related to ideas about promises in Philosophy
and Law. Its point of departure from obligation logics (see Obligations,
Deontic Logic) is the idea that all agents in a system should have
autonomy of control -- i.e. that they cannot be coerced or forced into
a specific behaviour. Obligation theories in computer science often
view an obligation as a deterministic command that causes its proposed outcome.
In Promise Theory an agent may only make promises about its own
behaviour. For autonomous agents it is meaningless to make promises
about another's behaviour.
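This autonomy principle can be sketched in code. The following is a minimal, purely illustrative model (the names `Promise`, `publish`, and the field layout are hypothetical, not taken from any Promise Theory formalism or library): a promise records who makes it, to whom it is published, and whose behaviour it describes, and it is rejected unless it concerns the promiser's own behaviour.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Promise:
    """A published intention, in the spirit of Promise Theory."""
    promiser: str  # the agent making the promise
    promisee: str  # the agent the promise is published to
    subject: str   # the agent whose behaviour the body describes
    body: str      # the promised behaviour

def publish(register: list, p: Promise) -> None:
    """Record a promise, enforcing the autonomy principle:
    an agent may only make promises about its own behaviour."""
    if p.subject != p.promiser:
        raise ValueError(
            f"{p.promiser} cannot promise {p.subject}'s behaviour")
    register.append(p)

register = []
# Valid: the server promises its own behaviour.
publish(register, Promise("server", "client", "server", "serve pages"))
# Invalid: the server cannot promise what the client will do.
try:
    publish(register, Promise("server", "client", "client", "pay for pages"))
except ValueError:
    pass  # rejected, as the autonomy principle requires
```

The check is deliberately local: nothing outside the promiser's own declaration is consulted, mirroring the idea that agents cannot be coerced from outside.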
Although this assumption could be interpreted morally or ethically, in
Promise Theory this is simply a pragmatic engineering principle, which
leads to a more complete documentation of the intended roles of the
actors or agents within the whole. The reason for this is that, when
one is not allowed to make assumptions about others' behaviour, one is
forced to document every promise more completely in order to make
predictions; this more complete documentation in turn points out the
possible failure modes by which cooperative behaviour can break down.
Command and control systems like those that motivate obligation theories
can easily be reproduced by having agents voluntarily promise to follow
the instructions of another agent (this is also viewed as a more
realistic model of behaviour). Since a promise can always be withdrawn,
there is no contradiction between voluntary cooperation and command
and control.
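This reconstruction of command and control can be sketched as follows. The sketch is hypothetical (the `Agent` class and its methods are illustrative, not any standard formalism): an agent voluntarily promises to apply instructions from a controller, and because the promise is the agent's own, it can be withdrawn at any time.

```python
class Agent:
    """An autonomous agent that may voluntarily promise to obey a controller."""

    def __init__(self, name: str):
        self.name = name
        self.follows = None  # controller this agent currently promises to obey
        self.state = {}

    def promise_to_follow(self, controller: str) -> None:
        # The promise is made by the agent itself -- voluntary cooperation.
        self.follows = controller

    def withdraw(self) -> None:
        # A promise can always be withdrawn.
        self.follows = None

    def receive(self, sender: str, key: str, value) -> bool:
        """Apply an instruction only if this agent's own promise covers it."""
        if self.follows == sender:
            self.state[key] = value
            return True
        return False

a = Agent("node1")
a.promise_to_follow("controller")
a.receive("controller", "policy", "v1")  # applied: the promise is in force
a.withdraw()
a.receive("controller", "policy", "v2")  # ignored: the promise was withdrawn
```

The controller never forces anything; its instructions only take effect through the receiving agent's own standing promise.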
In Philosophy and Law a promise is often viewed as something that
leads to an obligation. Promise Theory rejects this point of view.
Bergstra and Burgess have shown that the concept of a promise is quite
independent of that of obligation and indeed is simpler. The role of
obligations in increasing certainty is unclear, since obligations can
come from anywhere and an aggregation of non-local constraints cannot
be resolved by a local agent: this means that obligations can actually
increase uncertainty. In a world of promises, all constraints on an
agent are self-imposed and local (even if they are suggested by
outside agents), thus all contradictions can be resolved locally.
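The locality argument can be illustrated with a small sketch (the function and data layout are hypothetical, not from any formal treatment): because every constraint on an agent is a promise the agent made itself, the agent holds the complete set and can detect any contradiction without consulting anyone else.

```python
def find_contradictions(own_promises):
    """Return the topics on which an agent's self-imposed promises disagree.

    Since all constraints are local, the full set is available to the
    agent, which can resolve any clash itself (e.g. by withdrawing one
    promise). Obligations imposed from many external sources offer no
    such single vantage point."""
    seen = {}
    contradictions = set()
    for topic, value in own_promises:
        if topic in seen and seen[topic] != value:
            contradictions.add(topic)
        seen.setdefault(topic, value)
    return contradictions

# One agent's self-imposed constraints, as (topic, promised value) pairs:
promises = [("port", 80), ("logging", "on"), ("port", 8080)]
find_contradictions(promises)  # the agent alone can see the clash on "port"
```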
The theory of commitments in Multi-Agent Systems has some similarities
with aspects of promise theory, but there are key differences. In
Promise Theory a commitment is a subset of intentions. Since a
promise is a published intention, a commitment may or may not be a
promise. A detailed comparison of Promises and Commitments, in the
senses intended in their respective fields, is forthcoming and is not
attempted here.
Promises can be valuable to the promisee or even to the promiser.
They might also lead to costs. There is thus an economic story to tell
about promises. The economics of promises naturally motivate
`selfish agent' behaviour and Promise Theory can be seen as a motivation
for game theoretical decision making, in which multiple promises play the
role of strategies in a game (see BurgessFagernes04).
In spite of the generality of Promise Theory, it was originally
proposed by Burgess as a way of modelling the computer management
software Cfengine's autonomous
behaviour (see Burgess05).
Existing theories based on obligation were unsuitable. Cfengine uses a
model of autonomy both as a way of avoiding distributed inconsistency
in policy and as a security principle against external attack: no
agent can be forced to receive information or instructions from
another agent, so all cooperation is voluntary. For many users of the
software, this property has been instrumental both in keeping
their systems safe and in adapting to local requirements.