% trunk/chap_logic.tex, Wed Jan 19 20:20:17 2011 UTC, by mark

\chapter{Reasoning about $\mu$-promises}

To secure a clear and consistent model of intent we must use formal
methods, but what formalism do we have to express the necessary
issues?  Previous attempts to discuss the consistency of distributed
policy have achieved varying degrees of success, but have ultimately
fallen short of being useful tools except in rather limited arenas.
For example:

\begin{itemize}
\item Modal logics: these require one to formulate
hypotheses that can be checked as true/false propositions. This is not
a natural fit for policy, which expresses open-ended constraints on
behaviour rather than propositions to be verified.

\item The $\pi$-calculus: has attractive features but focuses on issues
that are too low-level for our main purposes. It describes systems in
terms of states and transitions rather than policies (constraints
on behaviour).

\end{itemize}

\section{Promise consistency and scope}

How can agent promises be inconsistent? Is a conflict of intention
enough to speak of inconsistency?  In fact it is not, because we must
deal with the problem of scope. Two agents might have conflicting
intentions with respect to some imaginary higher purpose, but have no
knowledge of each other or this imaginary purpose.

For example: A intends to have his son take over his trading business
when he retires. A's son intends to go to college and become a
scientist. A's wife intends to have him marry the daughter of a friend
and work the land to support them. Are any of these intentions
consistent or inconsistent? What imaginary goals does one measure such
consistency against?

Promises may only be made by a specific agent, so a more careful
question is to ask how an agent can be inconsistent in making its
promises.  At the extreme pole where every agent is independent and
sees only its own world, there would be no need to speak of
inconsistency: unless agents have promised to behave in a similar
fashion, they will do as they please. This is the property of a local
theory.

\begin{lemma}[Locality of promises of the first kind]
An agent cannot make an elementary promise (of the first kind) about
any agent other than itself, or its own behaviour.
\end{lemma}

A contradiction can still occur if a single agent promises two
contradictory things.  We call such a case incompatible or even broken
promises. When all information is promised by (and hence localized in) a
single agent, there is no need to go beyond that agent to look for
inconsistencies. If a specific agent promises to marry and
not to marry, then there is an obvious conflict. The locality means
we only need to look in one place for this.

\begin{definition}[Inconsistent promises]
A promise of
$b_1$ from agent $A_1$ to agent $A_2$ is said to be inconsistent if there
exists another promise from $A_1$ to $A_2$, of $b_2$, in which
$b_1\not=b_2$.
\end{definition}
In the worst case, one could consider making inconsistent promises
to be a case of breaking both promises.
This definition is very simple, and becomes most powerful when one
identifies promise {\em types}. It
implies that an agent can only break its own promises: if an agent
promises two different things, it has broken both of its promises.

\section{Indirection - a problem for deceptions}

We have said earlier that the promise body should not normally contain
references to agents. When agents' names are absent from the promise
body, the logic is very simple. This is a desirable property.

But there are promises that cannot be expressed like this.
Some complicated promises can be made (like the transference of
responsibility example in \cite{burgessdsom2005}). However there
are cases where we cannot avoid referring to third parties.
e.g.
\begin{itemize}
\item I promise I love you.
\item I promise I do not love Mary.
\item I promise Mary I do not love you.
\end{itemize}
Or ``I promise to you not to promise X to Y'', etc.
These levels of complexity must be built up step by step.

This problem cannot be avoided simply by keeping to the rule that the
promise body may not refer to other agents, since one could reframe the
promises to avoid mentioning anyone by name, providing the data ``about
whom I love'' as a service.

This matter can be resolved by treating such cases as incompatible promises.
Promises thus need to be classified into some kind of taxonomy or ontology
in order to know when certain promises are incompatible. e.g.

``I promise I love you''

``I promise Mary I don't love you''

These are the same promise type. There is no impediment to promising this, but
the result is, of course, necessarily a lie.

\subsection{Di-graphs}

In our definition of lies, we made an unwarranted assumption.
There is no problem with the following:
\beq
\begin{array}{c}
A_1 \promise{\pi} A_2\\
A_1 \promise{\neg \pi} A_3
\end{array}
\eeq
$A_1$ can promise contradictory things to different agents without
making any contradiction, provided the promise body does not refer
to agents.

\subsection{Tri-graphs}

This is a lie to either $A_2$ or $A_3$, i.e. the assertion of two or
more inconsistent promises to a promisee, about the behaviour of the
promising agent towards either the promisee or towards a third party.
\beq
\begin{array}{c}
A_1 \promise{\rm Love~ you} A_2\\
A_1 \promise{\rm Don't~ love~ A_2} A_3
\end{array}
\eeq

\section{Promise analysis}

Logic is a way of analysing the consistency of assumptions.  It is
based on the truth or falsity of collections of propositions
$p_1,p_2,\ldots$. One must formulate these propositions in advance
and then use a set of assumptions to determine their status.
The advantage of logic is that it admits the concept of a proof.

Is there a logic that is suitable for analyzing promises?  Modal logic
has been considered as one possibility, and some authors have made
progress in using modal logics in restricted
models\cite{ortalo1,glasgow1}. However, there are basic problems with
modal logics that limit their usefulness\cite{lupu2}.

More pragmatically, logic alone does not usually get us far in
engineering.  We do not usually want to say things like ``it is true
that $1+1 = 2$''. Rather we want a system, giving true answers, which
allows us to compute the value of $1+1$, because we do not know it in
advance. Ultimately we would like such a calculational framework for combining
the effects of multiple promises.  Nevertheless, let us set aside such
practical considerations for now, and consider the limitations of modal
logical formalism in the
presence of autonomy.

\subsection{Modal Logic and Kripke Semantics}

Why have formalisms for finding inconsistent policies proven to be so
difficult?  A clue to what is going wrong lies in the many worlds
interpretation of the modal logics\cite{modallogic}.
In the modal logics, one makes propositions $p,q$ etc., which are
either true or false, under certain interpretations. One then introduces
modal operators that ascribe certain properties to those
propositions, and one seeks a consistent language of such strings.

Modal operators are written in a variety of notations, most often with
$\boxx$ or $\diamond$.  Thus one can say $\boxx p$, meaning ``it is
necessary that $p$ be true'', and variations on this theme:

\begin{center}
\begin{tabular}{ll}
\hline
$\boxx p$ & $\diamond p = \neg \boxx \neg p$\\
\hline\hline
It is necessary that $p$ & It is possible that $p$\\
It is obligatory that $p$ & It is allowed that $p$\\
It is always true that $p$ & It is sometimes true that $p$\\
\end{tabular}
\end{center}
A system in which one classifies propositions into ``obligatory'',
``allowed'' and ``forbidden'' could easily seem to be a way to codify
policy, and this notion has been
explored\cite{ortalo1,glasgow1,deontic,lupu2}.

Well known difficulties in interpreting modal logics are dealt with using
Kripke semantics\cite{kripke1}.
Kripke introduced a `validity function' $v(p,w)\in\{T,F\}$, in which
a proposition $p$ is classified as being either true or false in a specific
`world' $w$. Worlds are usually collections of observers or agents in a system.

Consider the formulation of a logic of promises, starting with the
idea of a `promise' operator.
\begin{itemize}
\item $\boxx p =$ it is promised that $p$ be true.
\item $\diamond p = \neg\boxx\neg p =$ it is unspecified whether $p$ is true.
\item $\boxx \neg p =$ it is promised that $p$ will not be true.
\end{itemize}
and a validity function $v(\cdot,\cdot)$.

\subsection{Further problems with modal logic}

Some modal logics begin with the assumption of idempotence of the core
operators. For example, one assumes that $\boxx p \rightarrow
\boxx\boxx p$. Many justifications for this have been attempted; indeed,
the right hand side can be phrased in several ways in English:

\begin{itemize}
\item It is necessary that $p$ be necessary.
\item $p$ is necessarily a necessity.
\item It is required that $p$ be necessary.
\item $p$ must be necessary.
\item $p$ must be a requirement.
\end{itemize}
In each of these cases, the truth of this relation (the necessity of
necessity) seems to be simply false in a world of autonomous promises,
thus we reject this form of reasoning, autonomously and voluntarily
(see section \ref{invol}).

\subsection{Single promises}

A promise is usually shared between a sender and a recipient. It is
not a property of single agents, as in the usual modal logics, but of a
pair of agents.  However, a promise has direct implications only for the
behaviour of the promiser, not the promisee.

Consider the example of the Service Level Agreement, above, and let
$p$ mean ``Will provide data in less than 10ms''. How shall we express
the idea that a node $A_1$ promises a node $A_2$ this proposition?
Consider the following statement:
\beq
\boxx p, ~~~v(p,A_1) = T.
\eeq
This means that it is true that $p$ is promised at node $A_1$, i.e. node 1
promises to provide data in less than 10ms -- but to whom? Clearly, we must also
provide a recipient. Suppose we try to include the recipient in the same world
as the sender, i.e.
\beq
\boxx p, ~~~v(p,\{A_1,A_2\}) = T.
\eeq
However, this means that both nodes $A_1$ and $A_2$ promise to deliver data
in less than 10ms. This is not what we need; a recipient is still unspecified.
Clearly what we want is to define promises on a different set of worlds: the
set of possible links or {\em edges} between nodes. There are $N(N-1)$ such directed links.
Thus, we may write:
\beq
\boxx p, ~~~ v(p,A_1\rightarrow A_2)=T.
\eeq
This is now a unique one-way assertion about a promise from one agent
to another. A promise becomes a tuple $\langle \tau,p,\ell\rangle$,
where $\tau$ is a theme or promise-type (e.g. Web service), $p$ is a
proposition (e.g. deliver data in less than 10ms) about how behaviour
is to be constrained, and $\ell$ is a link or edge over which the
promise is to be kept. All policies can be written this way, by
inventing fictitious services.  Also, since every autonomous promise will
have this form, the modal/semantic content is trivial and a simplified
notation could be used.
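The simplified notation can be illustrated with a minimal sketch, assuming a representation of a promise as the tuple $\langle \tau,p,\ell\rangle$ with $\ell$ a directed edge between agent-nodes; all of the names below are ours, not part of the theory.

```python
# Illustrative encoding (our assumption): a promise is (theme, proposition, edge),
# where the edge is a directed link between agent-nodes, as in v(p, A1 -> A2) = T.
from typing import NamedTuple, Tuple

class Promise(NamedTuple):
    theme: str            # tau: the promise type, e.g. "web-service"
    proposition: str      # p: the constraint on behaviour
    edge: Tuple[str, str] # ell: (promiser, promisee), a directed link

def valid(promise: Promise, edge: Tuple[str, str]) -> bool:
    """Kripke-style validity restricted to a single directed link:
    the promise holds only on the edge over which it was made."""
    return promise.edge == edge

sla = Promise("web-service", "deliver data in < 10ms", ("A1", "A2"))
print(valid(sla, ("A1", "A2")))  # True: promised over this link
print(valid(sla, ("A2", "A1")))  # False: links are one-way
```

The point of the sketch is that the worlds of the validity function are the $N(N-1)$ directed edges, not the nodes themselves.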

\subsection{Regional or collective promises from Kripke semantics?}

Kripke structures suggest ways of defining regions over which promises
might be consistently defined, and hence a way of making uniform policies.
For example, a way of unifying two agents $A_1$, $A_2$  with a
common policy, would be for them both to make the same promise to a
third party $A_3$:
\beq
\boxx p, ~~~ v(p,\{ A_1\rightarrow A_3,A_2\rightarrow A_3 \})=T.
\eeq

However, there is a fundamental flaw in this thinking.  The existence
of such a function that unifies links, originating from more than a
single agent-node, is contrary to the fundamental assumption of
autonomy. There is no authority in this picture that has the ability
to assert this uniformity of policy.  Thus, while it might occur by
fortuitous coincidence that $p$ is true over a collection of links, we
are not permitted to {\em specify} it or demand it.  Each source-node
has to make up its own mind. The logic verifies, but it is not a tool
for understanding construction.

What is required is a rule-based construction that allows independent agents to come
together and form structures that span several nodes, by {\em
voluntary cooperation}.  Such an agreement has to be made between
every pair of nodes involved in the cooperative structure. We summarize this
with the following:

\begin{figure}[ht]
\begin{center}
\includegraphics[width=5cm]{figs/coop2}
%\psfig{file=coop2.eps,width=9cm}
\caption{\small (Left) Cooperation and the use of third parties to measure the equivalence of agent-nodes in a region. Agents form groups and roles by agreeing to cooperate about policy.  (Right) This is how the
overlapping file-in-directory rule problem appears in terms of promises to
an external agent. An explicit broken promise is asserted by a file,
in spite of agreements to form a cooperative structure.\label{pmm}}
\end{center}
\end{figure}
\begin{assume}[Cooperative promise rule]
For two agents to guarantee the same promise, one
requires a special type of promise: the promise to cooperate with
neighbouring agent-nodes, about basic promise themes.
\end{assume}
A complete structure looks like this:
\begin{itemize}
\item $A_1$ promises $p$ to $A_3$.
\item $A_2$ promises $A_1$ to collaborate about $p$ (denote this as a promise $C(p)$).
\item $A_1$ promises $A_2$ to collaborate about $p$ (denote this as a promise $C(p)$).
\item $A_2$ promises $p$ to $A_3$.
\end{itemize}
By measuring $p$ from both $A_1$ and $A_2$, $A_3$ acts as a judge of
their compliance with the mutual agreements between them (see
fig. \ref{pmm}). This allows the basis of a theory of measurement, by third
party monitors, in
collaborative networks. It also shows how to properly define
structures in the file-directory example (see fig. \ref{pmm}).

\subsection{Example: dependencies and handshakes}

Even networks of autonomous agents have to collaborate and delegate
tasks, depending on one another to fulfill promised services. An
important matter is to either find a way of expressing dependency
relationships without violating the primary assumption of autonomy, or
prove that it cannot be done\endnote{In nearly all cases agents are working
without irresistible forces guiding them, so the prospect of not being
able to express cooperative tasks as voluntary autonomous cooperation has
to be regarded as contrived to begin with. We claim it is `intuitively obvious'
that anything that can be achieved by force can also be achieved willingly.}.

Consider three agents $A_1,A_2,A_3$, a database server, a web server and
a client. We imagine that the client obtains a web service from the web server,
which, in turn, gets its data from a database.
Define propositions and validities:
\begin{itemize}
\item $p_1 =$ ``will send database data in less than 5ms'', $v(p_1,A_1\rightarrow A_2)=T$.
\item $p_2 =$ ``will send web data in less than 10 ms'', $v(p_2,A_2\rightarrow A_3)=T$.
\end{itemize}
These two promises might, at first, appear to define a collaboration between the
two servers to provide a promise of service to the client, but they do not.

The promise to serve data from $A_1 \rightarrow A_2$ is in no way connected to the
promise to deliver data from $A_2\rightarrow A_3$:
\begin{itemize}
\item $A_2$ has no obligation to use the data promised by $A_1$.
\item $A_2$ promises its web service regardless of what $A_1$ promises.
\item Neither $A_1$ nor $A_3$ can force $A_2$ to act as a conduit for database and client.
\end{itemize}

We have already established that it would not help to extend the
validity function to try to group the three nodes into a Kripke
`world'. Rather, what is needed is a structure that
completes the backwards promises to {\em utilize} promised
services -- promises that complete a {\em handshake} between the
autonomous agents. We require:
\begin{itemize}
\item A promise to uphold $p_1$ from $A_1\rightarrow A_2$.
\item An acceptance promise, to use the promised data from $A_2\rightarrow A_1$.
\item A conditional promise from $A_2\rightarrow A_3$ to uphold $p_2$ iff $p_1$ is both present and accepted.
\end{itemize}

\subsection{Acceptance}

We have deduced that three components are required to make a dependent promise.
This requirement cannot be derived logically; rather, we must
specify it as part of the semantics of autonomy.

\begin{axiom}[Acceptance or utilization promise rule]
Autonomy requires an agent to explicitly accept a
promise that has been made, when it will be used to derive a dependent promise.
\end{axiom}
We thus identify a second special type of promise: the ``usage'' or ``acceptance'' promise
that we discuss in the next chapter. This is a promise to receive, so it falls into the
class of promise bodies denoted $-b$.

What use is this construction? First, it advances the manifesto of
atomicity, making all policy decisions explicit.  The
construction has two implications:
\begin{enumerate}
\item The component atoms (promises) are all visible, so
the inconsistencies of a larger policy
can be determined by the presence or absence of a specific link
in the labelled graph of promises, according to the rules.
\item One can provide basic recipes (handshakes etc.) for building consensus and
``agent societies'', without hiding assumptions. This is
important in pervasive computing, where agents truly are politically
autonomous and every promise must be explicit.
\end{enumerate}
The one issue that we have not discussed is the question of how
cooperative agreements are arrived at. This is a question that has
been discussed in the context of cooperative game
theory\cite{sirisane,axelrod1}, and will be elaborated on in a future
paper\cite{siri2}. Once again, it has to do with the human aspect of
collaboration. The reader can exercise imagination in introducing
fictitious, intermediate agents to deal with issues such as shared
memory and resources.

\subsection{Temporal behaviour}

As an addendum to this discussion, consider {\em temporal logic}: this
is a branch of modal logic, in which an agent evolves from one Kripke
world into another, according to a causal sequence, which normally
represents time.  In temporal logic, each new time-step is a new
Kripke world, and the truth or falsity of propositions can span
sequences of worlds, forming graph-like structures.  Although time is
not important in {\em declaring} policy, it is worth asking whether a logic
based on a graph of worlds could be used to discuss the collaborative
aspects of policy.  Indeed, some authors have proposed using temporal
logic and derivative formalisms to discuss the behaviour of policy, and
modelling the evolution of systems in interesting ways\cite{bandara1,bandara2,lafuente1}.

The basic objection to thinking in these terms is, once again,
autonomy.  In temporal logic, one must basically know the way in which
the propositions will evolve with time, i.e. across the entire ordered
graph. That presupposes that such a structure can be written down by
an authority for every world; it supposes the existence of a global
evolution operator, or master plan for the agents in a network.
No such structure exists, {\em a priori}. It remains an open question
whether causality is relevant to policy specification.

\subsection{Interlopers: transference of responsibility}\label{interxx}

One of the difficult problems of policy consistency is in
transferring responsibilities from one agent to another: when an agent acts
as a conduit or interloper for another. Consider agents $a$, $b$ and $c$,
and suppose that $b$ has a resource $B$ which it can promise to others.
How might $b$ express to $a$: ``You may have access to $B$, but do not pass it on
to $c$''?

\begin{figure}[ht]
\includegraphics[width=12cm]{figs/interloper}
%\psfig{file=interloper.eps,width=8cm}
\caption{\small Transference of responsibility.\label{interloper}}
\end{figure}

The difficulty in this promise is that the promise itself refers to a third party,
and this mixes link-worlds with constraints.
As a single promise, this desire is not implementable in the proposed scheme:
\begin{itemize}
\item It refers to $B$, which $a$ has no access to, or prior knowledge of.
\item It refers to a potential promise from $a$ to $c$, which is unspecified.
\item It preempts a promise from $a$ to $b$ to never give $B$ along $a\rightarrow c$.
\end{itemize}
There is a straightforward resolution that maintains the autonomy
of the nodes, the principle of separation between nodes and
constraints, and which makes the roles of the three parties
explicit. We note that node $b$ cannot order node $a$ to do
anything. Rather, the agents must set up an agreement about their
wishes. This also reveals the fact that the original promise is
vague and inconsistent in the first instance, since $b$ never
promises that it will not give $B$ to $c$ itself.
The solution requires a cooperative agreement (see fig. \ref{interloper}).
\begin{itemize}
\item First we must give $a$ access to $B$ by setting up the handshake promises:
i) from $b\rightarrow a$, ``send B'', ii) from $a\rightarrow b$, accept/use ``send B''.
\item Then $b$ must make a consistent promise not to send $B$ from $b\rightarrow c$, by promising ``not $B$'' along this link.
\item Finally, $a$ promises $b$ to cooperate with $b$'s promises about ``not $B$'', by promising
to cooperate with ``not $B$'' along $a\rightarrow b$.
This implies, as the dotted line in the figure shows, that it
will obey an equivalent promise ``not $B$'' from $a\rightarrow c$.
\end{itemize}
At first glance, this might seem like a lot of work to express a
simple sentence. The benefit of the construction, however, is that it
preserves the basic principles of making every promise explicit and of
separating agent-nodes from their intentions.  This will be crucial
to avoiding the contradictions and ambiguities of other schemes.

\section{Modal logic and reinterpretation using promises}\label{invol}

The usual formulation of modal logic in terms of necessity is
unnecessarily fuzzy, even to the point of being incorrect. The
language of promises helps to clarify what is going on here.

The problem lies in the semantics of the most basic assumptions about
necessity. The rule that $\boxx p \rightarrow \boxx\boxx p$ is pure
nonsense if $\boxx$ means necessity. It is not at all necessary that
things be necessary: one can choose `requirements', which is a
voluntary act about things defined to be necessary. Conversely, one
can mandate a voluntary choice in a multiple choice exam.  Thus the
common language interpretation is simply wrong. However, all is not lost.

Consider a reinterpretation of these quantities as follows:
\begin{center}
\begin{tabular}{ll}
\hline
$\boxx p$ & $V p = \neg \boxx p$\\
\hline\hline
$p$ is involuntary & $p$ is voluntary\\
\end{tabular}
\end{center}
Involuntary acts are made by an irresistible force, so we are led
to the need to speak of forces that are beyond an agent's
control. However, every one of the following possibilities might be
true in some circumstances:

\begin{itemize}
\item $\boxx \boxx p$: it was forced upon the agent to make $p$ involuntarily true (coercion).
\item $V \boxx p$: the agent chose to make $p$ involuntarily true (discipline).
\item $\boxx V p$: the agent was forced to make $p$ a voluntary choice (authority).
\item $V V p$: the agent chose to make $p$ a voluntary choice (decision).
\end{itemize}

To handle this case, one can imagine making a proposition $p$ partially voluntary,
so that it consists of a voluntary (promised) part $p_v$ and an involuntary
part $p_\boxx$:
\beq
p &=& p_v \;\union\; p_\boxx - p_vp_\boxx\\
\boxx p &=& p_\boxx\\
V p &=& p_v
\eeq
Then we can say
\beq
p_v \; \intersect\;  p_\boxx &=& \emptyset\\
V p \; \intersect\; \boxx p &=& \emptyset.
\eeq

\section{Promise graphs}

The formulation of $\mu$-promises in the previous chapter has the
obvious characteristics of a network, or in graph theoretical terms a
so-called {\em directed graph} (a network of arrows). This is not a
particularly novel or unusual construction\endnote{Micropromises bear
a passing resemblance to the theory of capabilities in
ref. \cite{snyder1}}; many phenomena form such networks. However, this
very commonality is a powerful feature, and it will allow us to draw on
many important insights. In particular, the directedness of graphs
displays the intricacies of {\em causation}: the
ordering of multi-agent phenomena.

When a single agent makes a collection of promises to other agents,
some of these can be simplified or replaced by a single promise.
There can also be cases in which we attribute special meaning
(semantics) to particular combinations of promises, thus we begin
by discussing the basics of composition.

\section{The use promise is not primitive}

The use-promise we have
referred to so far cannot be a primitive promise type, since it includes
implicit information about the promise being used. We can express this by defining:
\beq
U(b) \equiv -\psi(b) \with \Upsilon(b).
\eeq
i.e.
\begin{quotation}
\begin{center}
Use $\equiv$ knowledge of content $\with$ intention to employ content
\end{center}
\end{quotation}
where $\Upsilon(b)$ is the primitive promise to act on an unconditional promise $\pi(b)$
that it has (necessarily) received directly.

\section{Transactions and duality}\label{not_duality}

Each promise graph, classified in terms of $+$ and $-$ promise types
has at least two dual or complementary viewpoints.

\subsection{Complementarity: Intent vs Implementation}

The first concerns the duality between planning and implementation,
or declarative and imperative forms of a plan.
We can see this as in fig. \ref{duality}.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=12cm]{figs/duality}
%\psfig{file=serial.eps,width=4.5cm}
\caption{Alternative interpretations of a service interaction, in terms of
a service or transport.\label{duality}}
\end{center}
\end{figure}

In the service view (a), the service provider takes ultimate responsibility
by making a promise directly to the end reader, but it is a promise
conditional on the behaviour of the post office whose role is to
deliver the book. The positive aspect of this view is that it reflects
the reality of the trading interaction.  The post office is merely an
assistant (see section \ref{assistance}).  This is a version of
causation in which the original intention is the driver for events.

In the transport view (b), we model the interaction in a way more closely
related to the physical implementation of the promise. The service provider
(bookshop) promises to pass the book to the post office, who in turn
promises the reader to deliver it, assuming that it gets the book in
the first place.  This is a version of causation in which transactions
leading to fulfillment of the promise are in focus.

In our view, the first of these is a more accurate representation of
the scenario, as it provides a deeper explanation for the events that
happen to transpire, and it places the end points of the service delivery in a
direct relationship with one another.

\subsection{Causation and time-reversal symmetry}\label{loops}

Consider fig. \ref{timereversal} in its two incarnations. The
left and right hand versions tell exactly the same story, with
structurally identical graphs, yet the $+$ and $-$ signs have
been reversed. The re-interpretation suggests (but does not prove)
that what is reversed is which agent initiates the
relationship. It is often natural to assume that a $+$ promise
must come before a use-promise to make use of it. However, this
shows that no such rule is necessarily the case, as by renaming the
promises, we achieve the opposite result\endnote{Indeed, we
could interpret an initial promise to use a service as signalling a
request for that service.}.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=12cm]{figs/timereversal}
%\psfig{file=serial.eps,width=4.5cm}
\caption{Alternative interpretations of a service interaction, in who initiates
the transaction.\label{timereversal}}
\end{center}
\end{figure}

This insight is part of a general symmetry in transactions between agents.

The symmetry between $\pm$ promises is a fundamental one, what
physicists would call {\em time reversal symmetry}, or the lack of an
`arrow of time', in this case of `who goes first'. So it is with all
physical laws and mathematical expressions of change: there is no
implied direction to these causative arrows initially. It is up to the
analyst to break the symmetry by specifying a {\em boundary condition}
(or, in this case, an initial condition), deciding a given point at
which we have certain knowledge of the system concerned and in which
direction from this milestone the predictions (promises) of behaviour
take us.

We should not assume anything about which agent makes or keeps a
promise first, simply from the structure of the graph. Additional
information is needed to specify the direction of causation.