
\chapter{Policy, Autonomy and Voluntary Cooperation}

This chapter is based on \cite{burgessdsom2005}.

\section{The policy timescale}

A policy is a declaration, summarizing a number of resolutions made by
an agent. The function of a policy is to gather the results of a
complex decision-making process into a single convenient summary of
intent.  A policy can be thought of as defining what is admissible or
acceptable. It can therefore be regarded as a {\em constraint} on the
domain of possible intentions for the agent.
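
This constraint view admits a compact statement (a sketch of the idea,
using ad hoc notation rather than anything defined in the text): if
$I_a$ denotes the set of all intentions available to an agent $a$,
then a policy selects an admissible subset
\begin{displaymath}
P_a \subseteq I_a ,
\end{displaymath}
and any intention outside $P_a$ is, by declaration, unacceptable to $a$.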

If we say ``policy'' rather than ``decision'', it implies more than a
passing determination, or spur-of-the-moment choice. A policy is
expected to have a more lasting value. Perhaps it enshrines a guiding
principle for the individual, which will preside over many situations
and encounters. Just as rules and laws are the cached results of
judicial process, policy is a way of avoiding the need to arrive at a
similar decision in each new case. Thus, by assumption, policy is
considered to be {\em slowly changing}.

Using promise theory, we can develop a notion of distributed
or entangled policy for multiple agents working together, using only
the pure policies of individual agents.
A policy is merely a superposition of possible promises.
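
As a sketch (assuming the arrow notation common in promise theory, in
which $a_1 \stackrel{b}{\longrightarrow} a_2$ denotes a promise with
body $b$ made by agent $a_1$ to agent $a_2$), the policy of an agent
is just the collection of promises it is prepared to make:
\begin{displaymath}
P(a_1) \;=\; \left\{\; a_1 \stackrel{b_i}{\longrightarrow} a_j \;\right\},
\end{displaymath}
ranging over the bodies $b_i$ and recipients $a_j$ that $a_1$ finds
acceptable.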

So policy can be tailored to the different roles and circumstances in
a division of labour, with agents at possibly different times and
locations in a collaborative network.

Policy is about how agents interact, and once agents can interact, two
agents may do so in a way that is acceptable to them both, but
not to a third party. This problem of {\em policy conflict} is
pernicious in an obligation picture, but we shall resolve it
simply in the promise picture.

\section{Policy versus decision theory}

Policy should not be confused with optimization or rational decision
theory. In decision theory, or game theory, decisions are made
rationally by maximizing a utility function\cite{morgenstern1}.  A
policy decision is distinct from an optimization because it is made by
agents, who can decide to behave either rationally or irrationally,
and might not have decided on an appropriate utility function.

In optimization one tries to {\em compute} the best course of action,
using a pre-decided criterion or model. The result of the optimization
is a rational computation, with (hopefully) only one answer---the
element of decision has been removed by a standardized
procedure. There are, however, several meta-level decisions
surrounding this: whether to use optimization at all, which
model is correct, and which answer (if there are several) to use, are
matters for policy to decide. Policy can therefore be informed and
guided by optimization, but in the final analysis the choice is {\em
free}.
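
To make the distinction concrete, here is a minimal sketch (in Python,
with purely illustrative action names and utilities, none of which
come from the text) contrasting a rational, computed choice with a
policy that merely constrains the admissible options:
\small
\begin{verbatim}
# Hypothetical sketch: names and numbers are illustrative only.
actions = ["accept", "reject", "defer"]
utility = {"accept": 3.0, "reject": 1.0, "defer": 2.0}

# Rational decision theory: a standardized procedure computes the
# single best answer; the element of decision has been removed.
rational_choice = max(actions, key=utility.get)   # -> "accept"

# Policy: a constraint on what is admissible; the agent still
# chooses freely among the remaining options.
policy = {"reject", "defer"}
admissible = [a for a in actions if a in policy]  # -> ["reject", "defer"]
\end{verbatim}
\normalsize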

The construction of a distributed policy from individual agent
promises is precise and can be understood graphically. Such an atomic
reconstruction provides a framework in which to reason about the
distributed effect of policy. Immediate applications include resolving
the problem of policy conflicts in autonomous computer networks.

\section{Rules and laws, norms and requirements}

One of the problems in discussing policy-based management of
distributed systems\cite{sloman3,lupu1} is the assumption that all
agents will follow a consistent set of rules. For this to be true, we
need either an external authority to impose a consistent policy from a
bird's eye view, or a number of independent agents to collaborate in a
way that settles on a ``consistent'' picture autonomously.

Political autonomy is the key problem that one has to deal with in ad
hoc / peer-to-peer networks, and in pervasive computing.  When the
power to decide policy is delegated to individuals, orders cannot be
issued from a governing entity: consistency and consensus must arise
purely by {\em voluntary cooperation}. There is no current model for
discussing systems in this situation.

In this chapter we outline a theory for the latter, and in doing so
provide a way to achieve the former. The details of this theory
require a far more extensive and technical discussion than can be
presented here; those details must follow elsewhere.

It has been clear to many authors that the way to secure a clear and
consistent picture of policy, in complex environments, is through the
use of formal methods.  But what formalism do we have to express the
necessary issues?  Previous attempts to discuss the consistency of
distributed policy have achieved varying degrees of success, but have
ultimately fallen short of being useful tools except in rather limited
arenas.  For example:

\begin{itemize}
\item Modal logics: these require one to formulate
hypotheses that can be checked as true/false propositions. This is not
the way most planners or designers work: checking propositions is a
stress-testing or destructive-testing procedure, not a constructive
method for design.

\item The $\pi$-calculus: this has attractive features but focuses on
issues that are too low-level for management. It describes systems in
terms of states and transitions rather than policies (constraints on
states)\cite{parrow1}.

\item Implementations like IPSec\cite{fuwu1,lisa0119},
Ponder\cite{ponder} etc.: these do not explicitly take into account
the autonomy of agents, and thus, while they implement policies well
enough, they are difficult to submit to analysis.
\end{itemize}

In each of these examples, one tends to fight two separate battles: the
battle for an optimal mode of expression and the battle for an
intuitive interface to an existing system.
For example, consider a set of files and directories to which we want
to assign certain permissions. One has a notion of policy as a
specification of the permission attributes of these files. Policy suggests
that we should group items by their attributes. The existing system
has its own idea of grouping structures: directories.
A simple, schematic example is the following pair of access-control
rules, one attached to a directory and one to a specifically named
file within it:
\small
\begin{alltt}
ACL1:                           ACL2:
/directory      allow all       /directory/file   deny all
\end{alltt}
\normalsize
Without clear semantics (e.g. first rule wins) there is now an
ordering ambiguity.  The two rules overlap in the specifically named
``file'', because we have used a description that overrides the
collection of objects implicit in the ``directory''.
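
The ambiguity can be seen in a minimal sketch (Python, with
hypothetical paths matching the schematic rules above): the same two
rules yield different permissions depending on which matching rule is
taken to win:
\small
\begin{verbatim}
# Hypothetical sketch of the ordering ambiguity.
rules = [
    ("/directory/",     "allow"),  # ACL1: implicit collection
    ("/directory/file", "deny"),   # ACL2: specifically named file
]

def decide(path, first_wins=True):
    matches = [act for prefix, act in rules if path.startswith(prefix)]
    return matches[0] if first_wins else matches[-1]

print(decide("/directory/file", first_wins=True))   # allow (ACL1 wins)
print(decide("/directory/file", first_wins=False))  # deny  (ACL2 wins)
\end{verbatim}
\normalsize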

In a real system, a directory grouping is the simplest way to refer to
this collection of objects. However, this is not the correct
classification of the attributes: there is a conflict of interest.
How can we solve this kind of problem?

In the theory of system maintenance\cite{burgesstheory}, one builds up
consistent and stable structures by imposing independent, atomic
operations, satisfying certain constraints\cite{lisa0163,burgessC11}.
By making the building blocks primitive, and by giving them special
properties, we ensure consistency.  One would like a similar construction for all
kinds of policy in human-computer management, so that stable
relationships between different activities can be constructed without
excessive ambiguity or analytical effort.  This chapter justifies such a
formalism in a form that can be approached through a number of
simplifications. It can be applied to network or host configuration,
and it is proposed as a unifying paradigm for autonomous management with
cfengine\cite{burgessC1}.
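
For orientation, the constraint referred to in the cited maintenance
theory can be stated compactly (a sketch of the idea, not a full
account): each atomic operation $O$ should be {\em convergent}, so
that repeated application leaves the desired state unchanged,
\begin{displaymath}
O(O(s)) \;=\; O(s) ,
\end{displaymath}
for any state $s$. Together with the independence of the operations,
this fixed-point property allows stable structures to be built up
incrementally, without order-dependent surprises.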

\section{Policy with autonomy}

By a policy we mean the ability to assert arbitrary constraints on the
behaviour of objects and agents in a system.  The most general kind of
system one can construct is a collection of objects, each with its own
attributes, and each with its own policy. A policy can also be quite
general: e.g. a constraint on the permissions of a set of files, or on
the manner in which an agent responds to requests.

In a network of {\em autonomous} systems, an agent is only concerned
with assertions about its own policy; no external agent can tell
it what to do, without its consent. This is the crucial difference
between autonomy and centralized management, and it will be the
starting point here (imagine privately owned devices wandering
around a shopping mall).

\begin{assume}[Autonomy]
No agent can force any other agent to accept or transmit
information, alter its state, or otherwise change its behaviour.
\end{assume}
(An attempt by one agent to change the state of another might be
regarded as a definition of an attack.)
This scenario is both easier and harder to analyze than the
conventional assumption of a system wide policy. It is easier, because
it removes many possible causes of conflict and inconsistency. It is
harder because one must then put back all of that complexity, by hand,
to show how such individual agents can form collaborative structures,
free of conflict.

The strategy in this chapter is to decompose a system into its
autonomous pieces and to describe the interactions fully, so that
inconsistencies become explicit. In this way, we discover the emergent
policy in the swarm of autonomous agents.
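
Schematically (a sketch under the autonomy assumption above, not a
definitive construction), the emergent policy of a system is nothing
more than the union of the individual agents' policies,
\begin{displaymath}
P({\rm system}) \;=\; \bigcup_{a} P(a) ,
\end{displaymath}
and a policy conflict appears as an overlap in which promises from
different agents constrain the same subject in incompatible ways.
Writing the union out explicitly is what makes such overlaps visible.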
