% PromiseBook / trunk / chap_freeconstraint.tex
% Revision of Wed Jan 19 20:20:17 2011 UTC by mark (initial import)


\chapter{Action, freedoms and constraints}\label{chap_force}

This chapter develops $\mu$-promises further to include a description
of change in agents. It will lead us to the associated concept of
behaviour.  Change is the most elementary consideration in any
description of a world, whether it be human, economic or
mechanical. It allows us to define variation, which in turn allows us
to differentiate conditions and make measurements.  Our description of
change will be formal and instantly recognizable from natural science,
but we shall use it to study the kinematics and dynamics of systems
from the viewpoint of promises.

From a philosophical perspective, this chapter takes an important step
in modelling. It begins from a dispassionate and mechanistic physics
of phenomena, and extends that view slightly to admit human
{\em motivation}. It does so by using the promise concept and without undue
violence to the formulation\endnote{The ability to model human
motivations without tearing down the formalism of rigour is essential
to reconcile physical and social sciences. Many physical scientists
are rightly suspicious of social sciences and humanities when they
abandon the forms of clear expression with the explanation that humans
cannot be pinned down in that way. We maintain that necessary and
sufficient description of characters must form the basis for any
description of the world.}.

One satisfying aspect of this chapter is thus to present a model that
applies equally to inanimate entities as it does to living ones. By
replacing context specific modelling with general notions, we end up
with an approach that can be applied in equal measure to the natural
sciences or the social sciences.

\section{Configuration space}

Every systematic description of the world is a balance between allowed
freedoms and imposed constraints. Systems exist and change within a
given realm of possibility called a configuration space that describes the
`degrees of freedom' of the system. This realm is
usually described by a set of variables representing the independent
characteristics that distinguish it, such as position in space, or a
colour in the spectrum, etc. These variables can exist in a number of
{\em states}.  However, within this realm of maximal possibility, there
is usually a more limited region occupied by the system.

The extent of this actual region is dictated by {\em
constraints} upon it. Some of these constraints may be considered
external (like boundaries) and some are internal (like tethers).  Some
are voluntary and some are involuntary. These distinctions are not
always important, and can often be unified by thinking of the world in
terms of different kinds of promises.

A child that promises not to play in the road makes a voluntary
constraint. Other constraints seem more fundamental.  For example, in
astronomy the realm of possibility is the universe, with space and
time variables. For a game of squash, it is within the boxed four
walls of the court.  Then there are kinematical constraints, often
called `equations of motion' for the states that represent the allowed ways that a
system evolves within this bounded region. For the game of squash
this would be a combination of the laws of physics (which cannot be
avoided) and the rules of the game (which are kept voluntarily).

As modellers we have a choice: do we represent these boundaries and
limitations as inevitable facts, or do we call them implicit promises
to obey laws on a voluntary basis? In many cases the distinction is
irrelevant as long as we maintain the same formal book-keeping of the
constraints. The distinction is mainly important to observers who
attempt to assess motivations -- and this applies mainly to human
sciences.  We can always model something involuntary as something
seemingly voluntary that is stubbornly reliable or persistent in its
choice\endnote{Physicists know that these `laws' are not in fact
observed with one hundred percent reliability, due to experimental
errors and even environmental or quantum uncertainties. For a full
discussion, we must address the problem of measurement in chapter
\ref{chap_measure}.}.

Sometimes we make modelling choices about where to draw the line
between what is internal or external to a problem. There is some
degree of arbitrariness to these choices. Usually one makes a choice
based on pragmatism; e.g., in the natural sciences, we often try to
limit the appearance of the outside world by talking about idealized
or closed systems, by which we mean that we intend to exclude all
distractions and isolate one specific issue. This is a convenience
that can bring great pedagogical simplification, allowing one to see
simple principles at work, and thus not to be frowned upon.
It is important, however, not to confuse models with `objective truth'.

\section{Change}

Implicit in several of the examples we have looked at so far is a
description of how change occurs, from the perspective of promises and
outcomes. Let us now complete this picture, with a model of states.
To set the stage for the discussion, we shall begin by assuming:
\begin{itemize}

\item {\em Each agent's condition in the environment can be characterized by its state.}

The state of an agent can include internal variables and knowledge,
and it can represent outward (observable) characteristics. An agent is
assumed to be aware of its own state.  In fact we want it to be in a
position to make value judgements or {\em preferences} about the
different states.

\item {\em An agent's state changes in response to events.}

Changes can be either continuous or discrete in principle, but we
shall focus on a discrete-time theory in which there is a minimum
measurable time interval. A change can be viewed as the effect of an
event $e$.  An event now needs to be defined more carefully.

\item {\em Events will be assumed to be `instantaneous'.}
In other words, an event persists for only a single unit of the
minimum time interval and then it is over.

\end{itemize}

\subsection{Events}

\begin{definition}[Event]
A transition from one state to another that is observed by an agent.
\end{definition}

\subsection{Encoding change using states}

To distinguish change we require a way of measuring it. For this we
need to introduce states. States are amongst the most primitive things
in our model of change. Promises and assessments do not necessarily
need to refer to states, but the outcomes of promises will lead to
physical consequences in the real world, and for that we must have names
for the different conditions in which a system finds itself.

Since states are familiar in many branches of science, we shall not
dwell too much on their philosophy.  A state is simply a label that
represents a unique configuration of the system at hand.  We can
regard states as primitives, since we need some essential notion of a
system by which to form a measurable scale of change\endnote{For
example, we can only observe colour because our brains are able to
classify optical information at each point into mixtures of separately
assessable states: red, green and blue.  A ruler with millimetre
markings can only be created because matter can exist in different
coordinate states, labelled by distance from an origin. If one were
not able to distinguish red from green, observers would be
colour-blind. If one were not able to distinguish distance separation,
as when looking along the axis of separation (as with the stars in the
sky), then we would not be able to say which object was closer. If no
change in an environment (like the different states of position of a
clock's hands) is registered, we cannot even measure the passing of
time.}.

The set of states may be ordered or unordered, to represent the
specific properties required\endnote{One can of course model even this elementary
requirement with promises. If the labels are each considered to be
agents and each agent promises to depend on the precedence of another, then
they will order themselves naturally.}.

We shall use the vector notation $\vec q$ for states.
A state may be written in transpose forms:
\beq
\vec q_{\rm label} ~~{\rm or} ~~ \vec q^{\rm T}_{\rm label}.
\eeq

The set of labels may be quite arbitrary, but it is also useful to
make use of numbers, especially when ordering is required.  We
henceforth represent `observables' as vectors of variables that may
take one or more values in a clearly defined set.

Since we are free to create a code-book that transforms any system of
labels into numbers, simply by counting\cite{shannon1}, we shall use
numerical labels where convenient, as this will allow us to step
through the possibilities by simple arithmetic, from an arbitrary
origin $q_0$ to $q_1 \ldots q_n$. We
introduce step-operators that count through these states, up to a
normalization factor\endnote{Such stepping operators are well known in
group theory, e.g. for Lie algebras, where they are related to the
roots and weights of the Lie algebra.} that we shall not define here.

\beq
a_+\cdot  q_0 &\rightarrow& q_1\\
a_-\cdot   q_1 &\rightarrow& q_0\\
a_+\cdot   q_n &\rightarrow& q_{n+1}\\
a_-\cdot   q_n &\rightarrow& q_{n-1}
\eeq
The operators $a_\pm$ are sometimes called creation and annihilation
operators on the `numerical value' of the state\endnote{In quantum
theory a ``particle'' is simply such a change-counter.}.  We note that
when assessing a change, an observer cannot speak of the reason or
cause of a change in general, since that information lies inside the
agent whose details are inaccessible.
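As an informal aside, the action of these stepping operators on
integer-labelled states can be mimicked in a few lines of Python (the
function names are ours, and the normalization factor is omitted, as in
the text):

```python
# Hypothetical sketch: states labelled by integers q_0 ... q_n,
# with stepping operators acting on the numerical label.

def a_plus(n):
    """Step up: a_+ . q_n -> q_{n+1} (normalization omitted)."""
    return n + 1

def a_minus(n):
    """Step down: a_- . q_n -> q_{n-1} (normalization omitted)."""
    return max(n - 1, 0)   # q_0 is the arbitrary origin in this sketch

q = 0                      # start at the origin q_0
q = a_plus(q)              # q_1
q = a_plus(q)              # q_2
q = a_minus(q)             # back to q_1
print(q)                   # -> 1
```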

\subsection{Equilibrium}

Promise equilibria may be viewed as pairs of bilateral promises
between agents.  The phenomenon is like symbiosis in biology.  This
mutual closure between promises is a basic topological configuration
that allows for the persistence of an operational relationship. When
this `trade' of promises is stable over some time, the result is
equilibrium.

Equilibrium does not imply static fixture. Dynamic or
statistical equilibria describe properties that are in balance on
average. This is the more normal state of affairs, since noise from
the environment can never be completely shielded.

Equilibria can themselves change, if the point of balance between what
is given and received changes.  Slow changes in the properties of the
agents or the environment can lead to a drift in the average
values. If this drift is slow enough to be distinguished from the
actual fluctuation exchanges themselves then we may call it {\em
adiabatic}. This means that there is weak enough coupling
between the fluctuation process and the process leading to average
drift to lead to a clean separation of time scales.

Systems that have strong coupling do not exhibit this property and are
much less predictable as a result\cite{renormalization,PLselforg}. The
interaction of scales is a vast topic that cannot be given a fair
treatment here. Suffice it to say that this is a crucial part of
behavioural description in any system, and the promise-binding
description allows us to understand it from a classical interaction
viewpoint.

It is easy to show that, for agents with preferences, such
equilibria correspond to Nash equilibria of two-person
games\endnote{In physics, equilibria play a dual role. They are
`macrostates' in which there are no observed changes, and yet they
also form the latent generators of time-ordered processes. The
so-called effective action, related to the `free energy' in
thermodynamics, is the sum of all the different ways that nothing (on
average) can happen in the
system\cite{reif1,abbott1,burgesscovariant}. This generates a lot of
symmetries. By breaking any one of these symmetries, something will
happen. Thus the effective action equilibrium state is the starting
point for all the ways a system can change. The variation of an
equilibrium subject to time-ordered boundary conditions breaks
time-reversal symmetry and leads to equations of motion in a single
time direction. We can use the same principle in promises.  }.

Ironically, equilibria are a convenient way of summarizing the
information about change in a system. Suppose we collect all the
different ways in which a system can be in deadlock into a
series. This series is called the {\em effective action} of the
system\cite{reif1,abbott1,burgesscovariant}. It consists of promise
graphs that are stalemated in increasingly complex ways. Keeping any
one of the promises first in a deadlock, i.e. by breaking the symmetry
along one of the interfaces, will lead to action in the system, with a
definite arrow of time.

\subsection{From equilibrium to an arrow of time}

Consider a generator of a cycle (see fig. \ref{cycle})\endnote{A cycle
in this instance is somewhat similar to the idea of a plane wave in
physics. It is a basic Fourier process that represents a countable
`quantum' unit. In quantum physics, this is often referred to
misleadingly as a particle.}, as discussed in section \ref{sb1}.  We
can express this more carefully now, in the language of assessments.
\beq
a_1 \promise{\pi_S: S / A(\pi_M)} a_2\\
a_2 \promise{\pi_M: M / A(\pi_S)} a_1
\eeq
In the previous formulation of our model for cooperation, agent $a_1$
promises a service $S$ if it assesses a promise of money $M$ to be
kept. Agent $a_2$ promises $a_1$ money $M$ if it assesses that it has
received service $S$.  This scenario represents a `deadlock' of the
kind well known in commerce. Neither party can proceed until one of
them breaks the stalemate.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=3cm]{figs/BM}
\caption{A `cycle' generator is two deadlocked promises back to back,
waiting for the deadlock symmetry to be broken.\label{cycle}}
\end{center}
\end{figure}
The model is more general than just a business interaction, of
course. It could describe any kind of pump or process. Consider a car
engine. There is a part that injects fuel if a flap is opened. Then
there is a part that ignites it (creating motion) if the fuel is
injected. These two processes are usually in deadlock, and have
two possible equilibria:

\begin{itemize}
\item Nothing happens -- both the fuel injector and the ignition
are at rest and remain so.
\item Both parts are in motion in a continuous cycle.
\end{itemize}
How does the system make a transition from one equilibrium to another?
Something must break the symmetry. Normally an external electric
starter motor initiates motion to break this deadlock. Sometimes
drivers will `bump start' the car by pushing it and releasing the
clutch if the battery is dead. In either case, there is an act
of {\em symmetry breaking}.

Thus, to resolve the dilemma of how to get the pump started, the
symmetry between the players must be broken by an `initial
condition'. Depending on which agent invokes this initial condition, one
might assess one of two possible patterns of outcome:
\beq
\pi_S: A(\pi_S)A(\pi_M)A(\pi_S)A(\pi_M)\ldots\\
\pi_M: A(\pi_M)A(\pi_S)A(\pi_M)A(\pi_S)\ldots
\eeq
In shorthand, we may write:
\beq
\pi_S: SMSMSMSM\ldots\\
\pi_M: MSMSMSMS\ldots
\eeq
Selecting one of these sequences is like selecting the `arrow of time'
in the system -- deciding in which direction the cycle proceeds,
because reversing the order of who goes first looks very much like
making the loop go clockwise or anticlockwise, depending on who goes
first. In the long run, it does not matter one way or the other, as
once the memory of the initial condition is long gone, the sequences
are indistinguishable -- they visit all the same states in equal
number\endnote{In the mathematics of difference equations, the
solution has two parts: a `particular integral' which describes the
transient response to the initial `kickstart' condition, and the
complementary function that describes the steady-state behaviour,
once all memory of the transient response has subsided. This is
directly analogous to the situation here.}.

It is
a question of symmetry breaking\endnote{This is completely analogous
to the way physical law is expressed in terms of differential or
operator equations that are reversible, where in reality only one of
the directions is observed. The point is that time itself is not
unique -- it is defined by an ordered sequence of change.  That is why
it is possible to construct both `advanced' and `retarded' viewpoints
for matching phenomena to boundary conditions at different points
along a generated sequence.}.
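The deadlocked cycle and its symmetry breaking can be sketched
informally as follows (a toy simulation of our own devising; the
promise bodies $S$ and $M$ are reduced to bare labels):

```python
# Toy simulation: each of the two conditional promises can only be kept
# after the other has been assessed as kept, so with no initial
# condition the cycle is deadlocked. Seeding either 'S' or 'M' breaks
# the symmetry and selects one of the two alternating sequences.

def run_cycle(first=None, steps=8):
    """Return the sequence of kept promise bodies, given an optional kick."""
    sequence = []
    pending = first                  # the symmetry-breaking initial condition
    while pending and len(sequence) < steps:
        sequence.append(pending)
        pending = 'M' if pending == 'S' else 'S'   # S enables M, M enables S
    return ''.join(sequence)

print(run_cycle())                   # -> '' : deadlock persists
print(run_cycle(first='S'))          # -> 'SMSMSMSM'
print(run_cycle(first='M'))          # -> 'MSMSMSMS'
```

Once the initial condition is forgotten, both seeded sequences visit the
same states in equal number, as described above.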

\subsection{Broken symmetry}

Two ingredients are needed to turn an equilibrium into time-ordered
behaviour:
\begin{enumerate}
\item A broken symmetry that selects a direction from some arbitrary
boundary value.
\item A set of states that may be used to label or measure the change.
\end{enumerate}

The arrow of time appears in promises through {\em pre-conditions}.
A conditional promise to act
\beq
A \promise{b|c} B
\eeq
with body $b$ and precondition $c$ leads to an ordering of $b$ and
$c$.

\section{Effective action of a promise}\label{effaction}

The set of such $\pm$-loop processes is called the effective action of
the system of ensemble of agents. It plays the role of a generator of
behaviour.  To begin with, there is no broken symmetry in this
generator so the behavioural outcome of the promises cannot be known
until a specific initial state is specified (see section \ref{loops}).

....

\section{Voluntary and involuntary constraints}

\subsection{Behaviour versus control}

In technological disciplines, it is common to assume that the outcome
of a system is to behave according to its design. In natural science,
this viewpoint is usually absent as the universe is assumed to be an
emergent outcome of processes that were not planned. Promises allow us
to capture both of these viewpoints and find a reasonable forum for them
to interact.

The behaviour of a system is not only about what is planned or
desired, but about what actually happens when a system develops in its
environment. This includes many unpredictable factors that need to be
described in a realistic model.

\subsection{Force, attack and involuntary change}

One of the issues which voluntary cooperation de-emphasises is the
notion of action by force. One can reason for this point of view in a
number of ways, moral, political, practical etc. However, the concept
of force is necessary; the perception of force does exist in nature and must
therefore be modelled.

The purpose of voluntary cooperation is not to reject such ideas of
physics, but to abstract its processes in a form that allows one to see
similarities and principles more clearly, while rejecting distinctions
that are merely arbitrary.  A theory of agents must therefore acknowledge the
existence of force, if only to marginalize it. Force is one way that
one claims authority; the other is to ride on pre-existing norms.

A force is what causes an event to happen, whether we understand
its origin or not. Without forces there can be
no causality. The causal chain for change is the following:

\beq
{\rm action} \rightarrow {\rm force} \rightarrow {\rm event} \rightarrow {\rm state~transition}
\eeq
In other words, a decision to act is followed by the application of
a force which results in an event that potentially changes the state
of one or more agents.

If an action is applied by agent $A$ to agent $A'$ to force an event
that changes $A'$'s state, then we can say that $A'$ has been attacked
by $A$, forced or made to change state involuntarily.

\begin{definition}[Force]
An external or environmental influence that all agents are vulnerable to.
It is what causes involuntary events to happen.
\end{definition}

The environment itself can impose boundary conditions on the behaviour
of an agent, e.g. by putting it in a cage or when passing through a
tunnel.  These boundary values might also be considered forces.  We are
used to thinking of time as a force of involuntary change, as it
seems that we have no way of avoiding its effects. However, as noted
above, even this can be modelled as behaviour that is stubbornly
reliable rather than truly forced.

\begin{definition}[Attack]
An attempt to force an agent to cooperate non-voluntarily.
\end{definition}

The creation of an involuntary obligation can thus be considered a form of attack.

Note that it is possible to model forces as agents (just as in the
past natural forces were attributed to acts of ``the gods'' or to
the impact of ``particles'').

An obligation is different from a force. It is only an assessment of
another agent's intentions.  In this regard, the use of obligation in
security and management systems is somewhat misleading, as there is
the implication that an obligation leaves no choice to the agent.
This is reflected, for instance, in economics by the two prevailing
views of agency: the Theory of the State, and thence the Law of the
State, assumes the existence of irresistible force. Without this, one has only
voluntary cooperation to fall back on. The Theory of the Firm and the
market fits more closely the notion of voluntary economic cooperation.

\section{Laws of behaviour for autonomous agents}\label{lawbeh}

We expect to find laws of conservation and change in any theory of
behaviour. Intuitively one might easily expect the following:
\begin{enumerate}
\item An autonomous agent continues with uniform behaviour, unless it
accepts an influence from outside.

\item The observable behaviour of an agent is changed when
promising to act on input from an outside source (see section \ref{trajectory}).

\item Every external influence $+b = \langle +\tau,\chi_1\rangle$ promised
by an external agent must be met by an equal and opposite promise $-b = \langle -\tau,\chi_2\rangle$ in order to effect a change on the
agent.  If $\chi_1\not=\chi_2$, then the interaction is of magnitude
$\chi_1 \cap \chi_2$.
\end{enumerate}
We shall show that basic laws of this form do indeed apply. In addition,
one should expect behavioural properties of any ensemble of
agents to be guided by three things:
\begin{itemize}
\item The internal properties of the agents themselves.
\item The nature of agents' links or bonds (promises).
\item The boundary conditions of the environment and location in which
the agents evolve.
\end{itemize}

\subsection{Behavioural trajectories}\label{trajectory}

To discuss behaviour over time we need the notion of a {\em
trajectory}. This is the path taken by (i.e. the set of intermediate
states between the start and the current value of) an agent's
observables through a space of states that we may call configuration
space. It represents the past or future history of an agent's state
transitions. Let $\vec q$ be a vector of state information (which
might include position, internal registers and other
details)\cite{burgesstheory}. Such a trajectory begins at a certain
time $t_0$ with a certain coordinate value $\vec q_0$, known as the
{\em initial conditions}.

The trajectory of a single agent is then a parameterized function $\vec q(t,\vec\sigma)$, for some vector of parameters $\vec \sigma$ arriving from an outside source, and we identify the behaviour
of an isolated system with the triplet that determines the trajectory:
\beq
\langle \vec q_0,t_0,\hat O(\vec \sigma)\rangle, ~ t > t_0.
\eeq
The symbol $\hat O(\vec\sigma)$ is a constant transition matrix or
operator which takes $q(t_i)$ to $q(t_{i+1})$ for integer time index
$i$, or alternatively $q(t)$ to $q(t+dt)$ in a differential form. We
can think of this operator as being the generator of time slices,
advancing by one time step on each operation; $\hat O(\vec\sigma)$
therefore represents a steady state behaviour and any alteration to
this steady state behaviour must come about by a transformation $\hat O \rightarrow \hat O'$, which by the rules of algebraic invariance
must have the form $\hat O' = T^\dagger \hat O T$ for some matrix $T$
and dual-transpose representation $\dagger$\footnote{This is a linear
transformation. It is not certain that all transformations of the
operator need be linear, but we make this assumption here for later work
to extend. Ref. \cite{burgessC11} shows examples satisfying such linearity,
and our results here are only for linear mechanics.}.

In other words, any change in an agent's state (called its behaviour) is generated by
\beq
\vec q \rightarrow \vec q' = \vec q + \delta \vec q = \hat O(\vec \sigma)\vec q = (1 + \hat G(\vec\sigma))\vec q.\label{ops}
\eeq
i.e. $\delta \vec q = \hat G(\vec\sigma) \vec q$. $\hat G(\vec\sigma)$ is
called the generator of the transition $\hat O$;  $\delta \vec q$ plays the role of a generalized
momentum or `velocity', so that the dynamical state is represented by the
canonical pair $(\vec q,\delta \vec q)$.
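As a concrete numerical illustration of eqn. (\ref{ops}), with
arbitrary values of our own choosing, the operator $\hat O = 1 + \hat G$
advances the state by one time step:

```python
# Illustrative values only: the transition operator O = 1 + G advances
# the state vector q by one discrete time step, and delta q = G q plays
# the role of the generalized momentum.

import numpy as np

G = np.array([[0.0, 0.1],
              [-0.1, 0.0]])          # an arbitrary generator G(sigma)
O = np.eye(2) + G                    # O = 1 + G

q = np.array([1.0, 0.0])             # initial state q_0
dq = G @ q                           # generalized momentum / 'velocity'
q_next = O @ q                       # one application of the operator

print(np.allclose(q_next, q + dq))   # -> True : q' = q + delta q
```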

We now have a simple transition matrix (or state machine) formalism
for describing the steady state behaviour of an agent, which results
from keeping its promises through the repeated action of a
`promise-keeping operator' $\hat O$. An agent whose observable properties do
not depend on any external circumstances has {\em exact} or {\em
rigid} behaviour\cite{hendrickx1,hendrickx2}. This is possible if
and only if the agent has no use-promises that pertain to its
own behaviour ($-b$ for some $b$), and all other promises $+b'$ are exact
promises. In this case the internal change operator $\hat O$ cannot
depend on any external information.

\subsection{Outcomes and goals}

The notion of a trajectory as a representation of behaviour allows us
to be more precise about the meanings of other commonly used terms.
We define the {\em collective behaviour} of several agents simply as the
bundle (i.e. direct sum) of trajectories of the ensemble of agents.

An {\em outcome} (which can equally well refer to the outcome of a
promise or of a transition taken to keep a promise) can be described as a
single point $q(t_{\rm final})$ of the configuration space reached at
some `final' time, along the trajectory of an agent. In other words it
is an identifiable end-point of an agent's behaviour.

A {\em goal} is then a set of one or more desired or acceptable
outcomes within an agent's own state space. In other words a goal is a bounded
space-time region that the agent would like its trajectory to intersect
(like the bull's-eye of a target): a set of states $\lbrace \vec q(t)\rbrace$,
possibly over some interval $t_{\rm min} \le t \le t_{\rm max}$. This definition obeys the principle of autonomy, namely that
an agent may only promise its own behaviour; however it leaves open the question
of how an agent might desire to change its environment (which is outside of its
own state space). We come back to this point in section \ref{leak} since it
requires the notion of force.
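As a minimal sketch (the names and data structures below are our own,
purely illustrative), a goal can be checked as the intersection of a
trajectory with a target region over a time window:

```python
# Minimal sketch: a goal as a set of acceptable states that the agent's
# trajectory should intersect within a time window [t_min, t_max].
# The trajectory is a mapping from discrete time to state labels.

def intersects_goal(trajectory, goal, t_min, t_max):
    """True if any state visited in the window lies in the goal region."""
    return any(q in goal
               for t, q in trajectory.items()
               if t_min <= t <= t_max)

traj = {0: 'q0', 1: 'q1', 2: 'q2', 3: 'q3'}   # hypothetical state history
print(intersects_goal(traj, goal={'q2'}, t_min=1, t_max=3))  # -> True
print(intersects_goal(traj, goal={'q9'}, t_min=1, t_max=3))  # -> False
```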

We wish to point out that a goal cannot be an elementary concept like
the subject of a promise, since it requires a feedback loop to
achieve, which requires several promises. A goal requires knowledge of
the state to be reached, and the ability to observe its own current
state, in order to know when intersection has occurred. As long
as the state is internal to the agent we can assume it can simply make
these promises itself. However, the situation is much more complex
where multiple agents are involved.  A collective goal requires all
agents to achieve a pre-arranged goal simultaneously.  This requires
not merely private promises but coordination and hence multiple
two-way communication between the agents.

Notice also that the concept of a goal requires the notion of a
value-judgement about what is desirable or acceptable. This is easily
provided in the promise framework if we always refer to the outcomes
of promises, but again it is highly complex where multiple agents are
involved.

Finally we should at least mention the notion of non-deterministic states,
i.e. macro-states in which a goal is achieved only on average over some
interval of time. A promise, after all, lasts for some time and is
verified perhaps several times. A promise therefore leads to a distribution
of outcomes in general, not merely a single state.
One may thus define an {\em equilibrium} as a goal that is
satisfied by a stable distribution over a `sufficiently persistent
interval of time'. As this raises many questions to be answered about
the statistical mechanics of agents, we shall defer a full
discussion of statistical behaviour for later work.

Now, consider how an agent might exhibit behaviour that is based on
input from another agent.  To see how we might effect a change in this
behaviour generated by $\hat O$ we need to follow the straightforward
rules of matrix transformations.
Reactive or adaptive behaviour means that autonomous agents make promises
to accept input from an external agent. Thus the operator must be made
functionally dependent on the input $\hat O \rightarrow \hat O(I)$. This
requires a promise binding to accept input conditionally on its provision, e.g.:
\beq
a  &\promise{+O(I)/I} &a_{\rm ext}'\\
a &\promise{-I}      &a_{\rm ext}
\eeq
where $I$ represents a promise of input from an external agent, and
$O(I)$ represents a promise of some observable output to another
external agent which is conditionally a function of the input, and is
kept via the operation of $\hat O(I)$.

Let $I \rightarrow +\Sigma_\tau$ be the body of a promise to change the generator of
behaviour $\hat O$ from an external agent: i.e. $a_{\rm ext} \promise{+\Sigma_\tau} a_i$. The agent whose behavioural generator $\hat G$ is being
altered promises to accept the change with $a_i \promise{.\Sigma_\tau} a_{\rm ext}$ and
we denote a linear realization of the
operator which keeps the promise to use this transformation by the
external agent simply by $\Sigma_\tau$ so that we have:
\beq
\hat G_\tau \rightarrow \hat G' =
\Sigma_\tau^\dagger \hat G_\tau \Sigma_\tau.
\eeq
The generator of this transformation matrix can, in the usual way,
be written as $\sigma_\tau$ where $\Sigma_\tau = I+\sigma_\tau$, and
\beq
\delta \hat G_\tau = \hat G' - \hat G = \sigma^\dagger_\tau\hat G + \hat G\sigma_\tau + \sigma^\dagger_\tau\hat G\sigma_\tau
\label{firstlaw}
\eeq

What can we say about the transformation matrix? In order to satisfy the
principle of autonomy it must have the following properties.
Let us define a valuation of a promise known as the {\em outcome} by
the notation: $o(a_1 \promise{b} a_2)$. The outcome returns a value in
$[0,1]$ where 0 means not-kept and 1 means kept. Intermediate values
can be used for any purpose, such as statistical compliance.
Autonomy requires us to stipulate:
\beq
\Sigma_\tau \propto o( a \promise{-\Sigma} a_{\rm ext})\nonumber\\
\Sigma_\tau^\dagger \propto o( a \promise{+\Sigma} a_{\rm ext})
\eeq
so that
\beq
\delta G &\rightarrow& 0\nonumber\\
\Sigma_\tau &\rightarrow& 1, ~\Sigma_\tau^\dagger \rightarrow 1\nonumber\\
\sigma_\tau &\rightarrow& 0, ~\sigma_\tau^\dagger\rightarrow 0,
\eeq
when one of the binding promises with the external agent is not kept.
This means that, unless the promises to deliver an interaction influence
are honoured by both parties, then steady state behaviour persists.

The above boundary conditions are the only interpretation of
interaction that preserves the requirements of autonomy.
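These boundary conditions can be sketched numerically as follows. The
construction is our own simplification: we model the binding outcome as
the product of the two promise outcomes, so that if either party fails
to keep its promise, $\Sigma \rightarrow 1$, $\sigma \rightarrow 0$ and
$\delta G \rightarrow 0$, and steady-state behaviour persists.

```python
# Hypothetical sketch: the linear transformation Sigma is scaled by the
# outcome o in [0,1] of the promise binding. Real matrices are used, so
# the dagger is an ordinary transpose in this sketch.

import numpy as np

def transform(G, sigma, o_give, o_use):
    """Return Sigma^T G Sigma with Sigma = 1 + o*sigma."""
    o = o_give * o_use                  # binding kept only if both promises kept
    S = np.eye(len(G)) + o * sigma
    return S.T @ G @ S

G = np.diag([1.0, 2.0])                 # illustrative steady-state generator
sigma = np.array([[0.0, 0.3],
                  [0.0, 0.0]])          # illustrative generator of Sigma

G_kept = transform(G, sigma, o_give=1.0, o_use=1.0)
G_broken = transform(G, sigma, o_give=1.0, o_use=0.0)  # use-promise broken

print(np.allclose(G_broken, G))         # -> True : no interaction, G unchanged
print(np.allclose(G_kept, G))           # -> False : both kept, behaviour changes
```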

\subsection{Laws of change}

We now state the basic law of causation for behaviour in terms of the
autonomous promises of the agents, under the condition of autonomy.

\bigskip
\begin{law}[Law of Inertia]
An agent's observable properties hold a constant, deterministic
trajectory $\vec q(t)$ unless it also promises to use the value of an external
source $\vec\sigma$ to modify its transition matrix $\hat O(\vec\sigma)$.
\end{law}
\bigskip

\begin{proof}
This follows from eqn. (\ref{firstlaw}). Steady state trajectories
imply that $\delta G = 0$, which in turn requires that for small
changes $\sigma_\tau = 0$, which implies no promise of type $-\Sigma$.
\end{proof}

Put another way, each agent has access only to information promised to it, or already
internal to it.  A local promise $a_i \promise{f(\vec\sigma)} a_j$ that
depends on an externally promised parameter $\sigma$ is clearly a
conditional promise $a_i \promise{f(\vec\sigma)/\vec\sigma} a_j$, where
$\vec\sigma$ is the value promised by another agent. In order to acquire
the value of $\vec\sigma$, we require $a_i \promise{-\vec\sigma} a_j$ and a
corresponding promise to provide $\vec\sigma$ to $a_i$ either from the
environment or from another agent.  Thus, if an agent does not promise
to use any input $\vec\sigma$ from another agent, all of its internal
variables and transition matrices must be constant.
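The Law of Inertia can be illustrated with a minimal simulation. This is a sketch under assumed names (the function `trajectory`, the particular matrix, and the additive form of the coupling are ours, not the text's): an agent with no use-promise for an external $\vec\sigma$ applies a fixed $\hat O$ and so follows a deterministic trajectory.

```python
import numpy as np

# Sketch of the Law of Inertia: an agent evolves its state q by
# repeatedly applying a transition matrix O_hat.  Without a use-promise
# for an external sigma, O_hat is constant and the trajectory is fully
# determined by the initial state.

def trajectory(q0, O_hat, steps, sigma=None):
    q, path = np.array(q0, dtype=float), []
    for t in range(steps):
        # A promised external input would modify the transition matrix;
        # with sigma=None the agent is isolated and O_hat never changes.
        O_t = O_hat if sigma is None else O_hat + sigma(t)
        q = O_t @ q
        path.append(q.copy())
    return path

O_hat = np.array([[0.0, 1.0], [1.0, 0.0]])   # fixed internal behaviour
run1 = trajectory([1.0, 0.0], O_hat, 4)
run2 = trajectory([1.0, 0.0], O_hat, 4)
# Identical runs: no external input, so the trajectory is deterministic.
assert all((a == b).all() for a, b in zip(run1, run2))
```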

Note also that by the definitions in \cite{siri1}, a conditional
promise is only a promise when combined with a use-promise. This fits naturally
with the argument in the theorem.
\bigskip
\begin{corollary}[A conditional promise is not exact]
By reversing the theorem we see that a conditional promise must, by
definition, have a residual degree of freedom, namely the
value of the dependent condition.
\end{corollary}

We can now state the interaction mechanics using
the formulations of the previous laws, and in
terms of clear statements about state transitions:

\bigskip
\begin{law}[Law of interaction]
The acceleration $\delta^2q$ of an agent's promise trajectory resulting
from a promise $a \promise{O/I} a'$ (i.e. the rate of change of its
generalized momentum $\delta q$) is proportional to the generalized
force $F=\delta \hat O=\delta \hat G$ promised by an external agent.
\end{law}
\bigskip
\begin{proof}
This now follows trivially from the transformation properties
and boundary conditions:
\beq
\delta_\tau \vec q = \hat G_\tau \vec q\label{x1}
\eeq
where $\hat G_\tau$ is the matrix valued generator of behaviours of
type $\tau$, see eqn. (\ref{ops}).
Under a change of $G$
\beq
\delta \vec q &=& \hat G \vec q \nonumber\\
\delta \vec q' &=& \hat G' \vec q
\eeq
thus
\beq
\delta^2\vec q = \delta \vec q' - \delta \vec q = (\hat G'-\hat G )\vec q = \delta\hat G\, \vec q.
\eeq
\end{proof}
\bigskip
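The algebra in the proof above can be checked numerically. The sketch below simply verifies, for arbitrarily chosen generators (our own example values), that the second difference of the trajectory is $\delta \hat G$ applied to $\vec q$.

```python
import numpy as np

# Numeric check of the Law of Interaction: the "acceleration" of the
# promise trajectory equals (G' - G) q, i.e. delta G acting on q.

q  = np.array([1.0, 2.0])
G  = np.array([[0.1, 0.0], [0.0, 0.2]])   # generator before the force acts
Gp = np.array([[0.3, 0.1], [0.0, 0.2]])   # generator after the external force

dq  = G  @ q        # delta q  = G  q
dqp = Gp @ q        # delta q' = G' q
d2q = dqp - dq      # second difference of the trajectory

assert np.allclose(d2q, (Gp - G) @ q)     # delta^2 q = (delta G) q
```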

\begin{law}[Transmitted force - reaction to influence]
The effective transmitted force due to a promise binding between two agents
is that which results from the outcome of the body-intersection of equal but opposite ($\pm$) promises
between the agents.
\end{law}
\bigskip
\begin{proof}
By the assumption of autonomy, the influence of
agent $a$ by $a_{\rm ext}$ is the conjunction of information
sent and information accepted: ${\rm influence} = {\rm offer} \wedge {\rm acceptance}$.
This has an obvious set theoretic formulation\cite{siri1}.
From the rules of promise composition, the binding
\beq
a_{\rm ext} \promise{+\langle \tau,\chi_1\rangle} a\nonumber\\
a \promise{-\langle \tau,\chi_2\rangle} a_{\rm ext}
\eeq
has an outcome that satisfies:
\beq
o\left(
a_{\rm ext} \promise{+\langle \tau,\chi_1\rangle} a,
a \promise{-\langle \tau,\chi_2\rangle} a_{\rm ext}
\right) =\nonumber\\
o\left(
a_{\rm ext} \promise{+\langle \tau,\chi_1\cap\chi_2\rangle} a
\right)
o\left(
a \promise{-\langle \tau,\chi_1\cap\chi_2\rangle} a_{\rm ext}
\right),
\eeq
so that the interaction is the intersection of the
agents' promises to give and receive the influence.
\end{proof}
\bigskip
Thus we can say that the trajectory's transformation must have the form:
\beq
\underbrace{~~\delta \hat O~~}_{\rm Force} ~~\simeq~~
\underbrace{o(a_{\rm ext}\promise{+\Sigma} a)}_{\rm field}\;
\underbrace{o(a \promise{-\Sigma^\dagger} a_{\rm ext})}_{\rm charge}
\eeq
There is a reassuring correspondence here with the physics of force
fields, which is a directly analogous construct, where $\pm$ charge
also labels which particles promise to respond to one another's field.
Promises appear like fields of influence whose values are sampled by
the action of measurement. Promised behaviour is represented by the
regular application of operators $\hat O$ on a state vector that
evolve the state and keep the promise. The outcome is unknown until
the act of verification is initiated, somewhat analogous to the
quantum theory of matter.
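The body-intersection rule in the law above can be sketched concretely. In this toy model (our own representation, not the text's) promise bodies are Python sets, the binding acts only on $\chi_1 \cap \chi_2$, and the transmitted force is the product of the two outcomes on that shared body.

```python
# Sketch: the effective transmitted force of a +/- promise binding is
# the product of outcomes ("field" x "charge") restricted to the
# intersection of the two promise bodies.

def transmitted_force(chi_give, chi_use, o_give, o_use):
    body = chi_give & chi_use            # body-intersection chi1 ∩ chi2
    if not body:
        return set(), 0.0                # disjoint bodies: nothing transmitted
    return body, o_give * o_use          # outcome product on the shared body

body, force = transmitted_force({"read", "write"}, {"write", "exec"}, 0.9, 1.0)
assert body == {"write"}
assert abs(force - 0.9) < 1e-12
```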

We emphasize that these laws derive directly from the assumptions of
autonomy, operational change and transition matrix formulation of the
agents. They are therefore beyond dispute and we would expect to find
this kind of law in any system of change with similar properties.

\section{Time and events}

Time is generally considered to be involuntary. However, the
importance of time is not the same in all problems. Time's relevance to a
problem depends on there being interactions that measure its
passing. A system in a steady, unchanging state has no knowledge of
time at that level of description.  Time is therefore not an absolute
quantity.

Promise theory is mainly about the analysis of epochs in which
promises are essentially fixed.  If basic promises change, we enter a
new epoch of the system in which basic behaviours change.  For a fixed
static set of promises, behaviour continues according to the same
basic pattern of interactions between agents and environment.  This is
not a limitation as we shall see in this and the following chapter,
but it brings a simple order to the analysis\endnote{We are just making
the same kind of approximation used in calculus of small differentials,
in which the background is assumed constant over each differential. If the
intervals are small enough compared to the rate at which the promises
are changing, then the approximation is exact.}.

\subsection{Time and action (relativity)}

It is conventional to think of time as being external to processes:
processes are said to take place {\em in} time. From an observational
viewpoint, the opposite is true. Without changes of state, time cannot
be observed, and that which cannot be observed can not be claimed to
exist for any observer.

\section{Forces, environment and external goals}\label{leak}

By the first law, systems are most predictable when completely
isolated from external forces. At the next level, their changes can be
predicted when coupling to external forces is weak. The stronger the
coupling between agents, the more unknown information enters each
agent. This can lead to disordered behaviour which requires more
information than is practical or available to understand.

In promise theory all agents begin by default in a state of isolation,
impervious to outside influence. It is only through their own promises
that they can volunteer to be influenced.

Three questions remain in the discussion: i) how do we explain
irresistible forces such as weather, power-failures and other `acts of
god'? ii) How do we model the fact that an agent can affect its
environment, e.g. draw graffiti, move an object etc.?  Finally, iii)
how do we model the presence of boundary conditions, or restrictions
over which agents have no choice? The concept of force is similar to
that of an attack:

\bigskip
\begin{definition}[Attack/Force]
An attempt to alter an agent's trajectory without its consent (i.e. in the
absence of a use-promise). This is a breach of autonomy.
\end{definition}
\bigskip

Let us consider these briefly for completeness, but defer a full
discussion for later work.  There are two classical approaches one
might take to modelling environmental forces. The first is to think of
the environment as simply one or more surrounding agents that
distinguish themselves by the magnitude of their influence. The
alternative is to treat external objects, including the system boundary,
as being ``something else'', i.e. some kind of external object that is
not an agent. To justify the latter approach we would have to extend
the framework of this paper to say what we mean by such an external
force, and thus we avoid this in the present work. We wish instead to
give a very simple view of environmental interaction by treating the
environment as a single ``super-agent'' which promises to allow itself
to be changed by any agent, and by which all agents have
``voluntarily'' promised to be influenced. Although this is somewhat artificial\footnote{In fact it
is no more artificial than giving certain particles ``charge'' and
defining the notion of a field in physics.}, it allows us to continue
our simple formalism without unnecessary complications.

How can an agent move an object in promise theory? The state of the
object needs to be represented in a state space and we must be able to
discern its trajectory. If the object is of sufficient importance we
can model it as a separate agent that promises to allow itself to be
moved by another. Alternatively we can consider all such objects to
be mapped into the state space of an environmental super-agent. This super-agent
can be influenced by agents and can influence them in return.

\bigskip
\begin{definition}[Leaky agents]
We define a {\em leaky agent} to be an agent making any promise to receive
information from the environment $E$,
$a_i \promise{-{\rm env}_i} E$.
\end{definition}
\bigskip

The study of real systems is therefore a study of leaky agents. The
environment itself is also leaky in the sense that it can be affected
by other agents. This is how we account for stigmergy, for example.

With this view, we apply boundary conditions or coupling to the
environment by giving every agent a use-promise from this
environmental agent to allow some non-specified environmental
conditions to be explicitly modelled. The environment agent is assumed
to promise its information to all other agents.  This is also the way
to understand how agents making non-rigid promises can exhibit random
behaviour. In order to justify random behaviour we must explain how
disordered information enters the agents and selects values from
within the bounds of the inexact promises. This is the only mechanism
for exhibiting fluctuating behaviour.
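This mechanism can be sketched in a few lines. The model below is illustrative (the interval representation of an inexact promise and the function names are our assumptions): the environment super-agent selects the actual value from within the bounds the inexact promise leaves open, which is how apparently random behaviour enters an otherwise deterministic agent.

```python
import random

# Toy model of a leaky agent: an inexact promise constrains a value to
# an interval, and the environment super-agent (here a seeded RNG)
# selects the realized value from within those bounds.

def leaky_step(bounds, env_rng):
    lo, hi = bounds
    value = lo + (hi - lo) * env_rng.random()  # environment picks within bounds
    assert lo <= value <= hi                   # never outside the promise
    return value

env = random.Random(42)                        # the environment super-agent
samples = [leaky_step((0.2, 0.8), env) for _ in range(100)]
# The behaviour fluctuates, but always within the promised bounds.
assert all(0.2 <= v <= 0.8 for v in samples)
```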

By modelling forces using fictitious promises we can use the three
laws above to explain all changes in a system in a common
framework. Regardless of whether one finds this to one's taste, it is
a rather practical step for simple modelling.  We add finally that the
concept of a goal might now be extended to allow agents to desire
outcomes about states in the environment, not only in their own state
space. This is reasonable for any agent as long as it has a
use-promise from another agent to accept changes of state. However, a
goal is still not an elementary concept that can be the subject of a
promise -- it is an outcome that might emerge from the behaviour.

\section{Emergent behaviour and goals}

When is behaviour designed and when does it emerge? Promises are
designed but outcomes emerge. Leaky agents especially can be
influenced by the environment and we cannot completely determine their
trajectories. We speak of emergence when we identify behaviour that
appears organized, but where there are no perceptible promises to
account for it.

Many authors have fallen into the trap of using the terminology of
goals to describe emergence -- goals which the parts of the system are
incapable of knowing individually. This is a superfluous explanation, which likely stems
from the mistaken belief that programming determines real-world behaviour. We have shown
that this is not the whole story and now offer a simple explanation
for emergent organization.

To understand emergence we must look to the spectrum of observable
outcomes of agents' promises.  Inexact promises allow for
unpredictability and the question is to understand whether organized
behaviour is likely, in spite of not being an agreed cooperative goal
of the agents. We have proposed that promises must be inexact to allow
for the possibility of unpredictable behaviour\cite{siriAIMS1} and that
the following simple definition of emergent behaviour
is plausible and captures the popular views in the literature.

\bigskip
\begin{definition}[Emergent behaviour]
Emergent behaviour is the set of trajectories belonging to leaky
agents exhibiting non-rigid, collective behaviour that is
observationally indistinguishable from organized behaviour.
\end{definition}
\bigskip

The important issue here is observational indistinguishability. It is
the end observer who looks for `meaning' (i.e. a goal) in the
organized outcomes; the actual promises made by the agents could in
fact be anything that allows the observed outcome to arise. In other words,
an outcome `emerges' simply because it arises.

There are many mysterious definitions of
emergence in the literature but emergent behaviour can be understood
easily by looking for any promises that enable the observed outcome
and using algebraic reduction to account for behaviour as in section
\ref{remerge}, keeping firmly in mind the notion of observational
indistinguishability. After all, if emergent behaviour is real, it
should ultimately be measurable by the same standards as any other
kind of behaviour.

The key to emergence then is that the residual freedoms in the agent
promises (i.e. that are not constrained exactly), are selected from by
interaction with the environment, resulting in patterns of behaviour
that are unexpected, but which nevertheless lie within the bounds of
the promises given.
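A small simulation illustrates this selection mechanism. The setup is our own (hypothetical agents and parameter values): each agent's promise leaves a residual freedom, but when all leaky agents sample the same environmental input, their outcomes cluster, and to an observer this is indistinguishable from organized, swarm-like behaviour even though no agent promised to coordinate with any other.

```python
import random

# Emergence sketch: residual freedoms, selected by a *shared*
# environmental value, produce clustered outcomes that look organized.

def agent_outcome(env_value, jitter_rng):
    # Residual freedom: a small private variation within the promise bounds.
    return min(1.0, max(0.0, env_value + 0.05 * (jitter_rng.random() - 0.5)))

env_value = random.Random(1).random()          # shared environmental selection
agents = [random.Random(i) for i in range(20)] # 20 independent leaky agents
outcomes = [agent_outcome(env_value, a) for a in agents]

# No agent promised coordination, yet the outcomes cluster tightly.
spread = max(outcomes) - min(outcomes)
assert spread <= 0.05
```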

An example of emergent behaviour often cited is the idea of a {\em
swarm}.  Many definitions of swarms have been proposed.
Ours is simply as follows:

\bigskip
\begin{definition}[Emergent group or Swarm]
A collection of leaky agents that may be seen by any
external observer as exhibiting undifferentiated, collective behaviour.
\end{definition}
\bigskip

Which came first, the chicken or the egg?  Causation is not always an
appropriate description of phenomena in which there is no arrow of
time. Such phenomena are known in the natural sciences as
`equilibria'.

Equilibrium can be a statistical phenomenon: there are many changes,
all happening in a forward temporal direction, but the outcomes of
these changes are measured in such a way that there is change both
forwards and backwards, and thus the average effect is zero.

Equilibrium can also be `exact', i.e. equal and opposite changes that
cancel instantaneously. This is true in `static equilibrium' such as a
person not falling through the floor, since the weight of the person
is cancelled by the force of the floor pushing up (expressed by
Newton's third law). It can also be a dynamical cancellation, such as
in the interference of waves where peaks and troughs cancel (the
principle used in sound-cancelling headphones), or someone running
backwards down an escalator.

Consider the following chain of events.  An agent whom we call the
`player' is asked to make a rational choice about whether to lift one
of two cups in front of him. By choosing either the left or the right
cup, he can claim a prize. The rational choice is (by definition) that
choice which maximizes his reward.

By analyzing the probabilities (as one would in game theory), it comes
to be known (by whatever means) that the probable reward for
choosing the left hand cup is greater, so (rationally) that is the
best strategy. As long as the information is trustworthy, this is the
best solution and there is no argument. Game theory selects the best
strategy: `choose the left cup'.

The suggestion of paradox occurs when one introduces another agent
that claims to be able to predict the future exactly -- not merely
statistically. This second agent, the `predictor', makes a prediction
about what the player will do.  This prediction is assumed to be
correct. The game tree is depicted in fig. \ref{newcomb}; time is
measured downward.

\begin{figure}
\begin{center}
\begin{verbatim}
                 BEGIN
                /     \
Predictor      L       R
              / \     / \
Player       L   R   L   R

Payoff       1   0  100  50
\end{verbatim}
\end{center}
\caption{The game tree for the prediction scenario; time is measured downward.}
\label{newcomb}
\end{figure}

The figure depicts a traditional view of a game, drawn as a tree
because there is an implied causation, or arrow of time. Information
moves in the direction of the implied arrows (downward). But this is
not the case if the Predictor has certain knowledge of the future.

From the diagram, as drawn, we seem to imply that there are 4 possible
pathways:

\begin{center}
\begin{tabular}{ccc}
Predictor & Player & Payoff\\
L& R& 0\\
L& L& 1\\
R& R& 50\\
R& L& 100\\
\end{tabular}
\end{center}

Without the prediction, $L$ is always the ``dominant strategy'' in game
theory parlance. But with the predictor, there is no dominant strategy.

The paradox is supposedly this: if we ignore the predictor then $L$ is
the {\em dominant strategy}: 1 is better than 0, and 100 is better
than 50.  However, the highest payoff is 100, which can only be
reached by ignoring the advice of the predictor. Thus a rational agent
would have to defy certain knowledge of the future to get this reward
-- or put provocatively: the knowledge of the future forces the
rational agent to make an irrational decision.

It is easy to see that the source of the so-called paradox is careless
book-keeping of assumptions. Either a predictor knows the future or it
does not.  Either it is correct, or it only guesses.  As soon as we
add the genuine predictor that knows the future, there is a new {\em
absolute constraint} on the allowed transitions: namely, the prediction
implies only two possibilities, $L,L$ or $R,R$. With this constraint,
there is no paradox. The path $R,L$ is simply fictitious, because it
can never happen from the assumption of prescience.
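The book-keeping can be made explicit in a few lines. This sketch (function name and set representation are our own) uses the payoff table from the text: without the predictor, $L$ dominates; restricted to the paths a perfect predictor allows, no dominant strategy exists and the rational choice is simply the best allowed path.

```python
# Payoffs indexed by (predictor, player), from the table in the text.
payoff = {("L", "L"): 1, ("L", "R"): 0, ("R", "L"): 100, ("R", "R"): 50}

def dominant_choice(allowed):
    """Player choice at least as good for every allowed predictor move, or None."""
    for cand in ("L", "R"):
        other = "R" if cand == "L" else "L"
        preds = {pred for pred, _ in allowed}
        if all((pred, cand) in allowed and (pred, other) in allowed
               and payoff[(pred, cand)] >= payoff[(pred, other)]
               for pred in preds):
            return cand
    return None

# Without the predictor all four paths are allowed: L dominates (1>0, 100>50).
assert dominant_choice(set(payoff)) == "L"

# A perfect predictor permits only (L,L) and (R,R): no dominant strategy,
# but the rational choice is just the best remaining path, (R,R) with 50.
constrained = {("L", "L"), ("R", "R")}
assert dominant_choice(constrained) is None
assert max(constrained, key=payoff.get) == ("R", "R")
```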

Is this realistic? Well, the paradox scenario does not claim to be
realistic of course. It is meant to provoke criticism of the notion of
a rational agent. There are no perfect predictors of the future,
naturally. Suppose then that the predictor does not have certain
knowledge (only probable knowledge, as one might expect of a good
weather forecast); then we immediately fall back to a classic gambling
problem.

The confusion happens perhaps because game theory has no notion of an
`absolute constraint' that forces us to select either $L,L$ or $R,R$,
so sneaking a constraint in the back door brings confusion.  Moreover,
since there is then no dominant strategy $L$ or $R$ for a
fake-predictor, the scenario simply reduces to a problem of {\em
incomplete information}. The problem is `under-constrained', and it
does not have a unique solution until some other input arrives to
further constrain things.

Newcomb's paradox is thus a confusion about the meaning of freedom and
constraint. It clothes itself in the garb of game theory (see chapter
\ref{gameschap}), where so-called rational agents are supposed
to make choices about the future based on their expected rewards.  The
paradox however is a muddle about what kind of rationality one is
allowed to expect of a so-called rational agent, constrained to act in
a particular way by forces beyond its control. In other words, it is
about a lack of freedom to choose.

\subsection{Promise equilibrium in the paradox scenario}

Let us consider a very simple model of the scenario using promises.
The actor promises to act and the predictor promises to predict.
For the sake of argument, let us assume that they make these promises
to one another.

We can construct a proof that rationality is impossible under these constraints...