% Annotation of /trunk/chap_logic.tex

\chapter{Reasoning about $\mu$-promises}\label{chap_logic}

To secure a clear and consistent model of intent we must use formal
methods, but what formalism do we have to express the necessary
issues? Previous attempts to discuss the consistency of distributed
policy have achieved varying degrees of success, but have ultimately
fallen short of being useful tools except in rather limited arenas.
For example:

\begin{itemize}
\item Modal logics: these require one to formulate hypotheses that can
be checked as true/false propositions. This is not the way system
administrators work.

\item The $\pi$-calculus: this has attractive features but focuses on
issues that are too low-level for our main purposes. It describes
systems in terms of states and transitions rather than policies
(constraints about states)\cite{parrow1}.

\end{itemize}

\section{Promise consistency and scope}

How can agent promises be inconsistent? Is a conflict of intention
enough to speak of inconsistency? In fact it is not, because we must
deal with the problem of scope. Two agents might have conflicting
intentions with respect to some imaginary higher purpose, but have no
knowledge of each other or of this imaginary purpose.

For example: $A$ intends to have his son take over his trading business
when he retires. $A$'s son intends to go to college and become a
scientist. $A$'s wife intends to have him marry the daughter of a friend
and work the land to support them. Are any of these intentions
consistent or inconsistent? What imaginary goals does one measure such
consistency against?

Promises may only be made by a specific agent, so a more careful
question to ask is: how can an agent be inconsistent in making its
promises?
At the extreme pole where every agent is independent and sees only its
own world, there would be no need to speak of inconsistency: unless
agents have promised to behave in a similar fashion, they will do as
they please. This is the property of a local theory.

\begin{lemma}[Locality of promises of the first kind]
An agent cannot make an elementary promise (of the first kind) about
any agent other than itself, or about anything other than its own
behaviour.
\end{lemma}

A contradiction can still occur if a single agent promises two
contradictory things. We call such a case incompatible or even broken
promises. When all information is promised by (and hence localized in)
a single agent, there is no need to go beyond that agent to look for
inconsistencies. If a specific agent promises to marry and not to
marry, then there is an obvious conflict. The locality means we only
need to look in one place for it.

\begin{definition}[Inconsistent promises]
Let $b_1$ and $b_2$ be promise bodies of the same type, i.e. about the
same issue. A promise of $b_1$ from agent $A_1$ to agent $A_2$ is said
to be inconsistent if there exists another promise from $A_1$ to
$A_2$, of $b_2$, in which $b_1\not=b_2$, i.e. if $b_i = (\tau,\chi_i)$,
then $\chi_1\not=\chi_2$.
\end{definition}
In the worst case, one could consider making inconsistent promises to
be a case of breaking both promises. This definition is very simple,
and becomes most powerful when one identifies promise {\em types}. It
implies that an agent can only break its own promises: if an agent
promises two different things, it has broken both of its promises.


\section{Indirection -- a problem for deceptions}

We have said earlier that the promise body should not normally contain
references to agents.
When agents' names are absent from the promise body, the logic is very
simple. This is a desirable property.

But there are promises that cannot be expressed like this. Some
complicated promises can be made (like the transference of
responsibility example in \cite{burgessdsom2005}). However, there are
cases where we cannot avoid referring to third parties, e.g.
\begin{itemize}
\item I promise I love you.
\item I promise I do not love Mary.
\item I promise Mary I do not love you.
\end{itemize}
Or ``I promise you not to promise X to Y'', etc. These levels of
complexity must be built up step by step.

This problem cannot be avoided simply by keeping to the rule that the
promise body may not refer to other agents, since one could reframe
the promises to avoid mentioning agents by name, e.g. by providing the
data about ``whom I love'' as a service.

This matter can be resolved by treating such cases as incompatible
promises. We need some kind of taxonomy of promises that are related
and incompatible. Promises need to be classified into a
taxonomy/ontology in order to know when certain promises are
incompatible, e.g.

``I promise I love you''

``I promise Mary I don't love you''

These are promises of the same type. There is no impediment to making
both, but the result is, of course, necessarily a lie.


\subsection{Di-graphs}

In our definition of lies, we made an unwarranted assumption. There is
no problem with the following:
\beq
\begin{array}{c}
A_1 \promise{\pi} A_2\\
A_1 \promise{\neg \pi} A_3
\end{array}
\eeq
$A_1$ can promise contradictory things to different agents without
making any contradiction, provided the promise body does not refer to
agents.

\subsection{Tri-graphs}

This is a lie to either $A_2$ or $A_3$, i.e.
the assertion of two or more inconsistent promises to a promisee,
about the behaviour of the promising agent towards either the promisee
or towards a third party.
\beq
\begin{array}{c}
A_1 \promise{\rm Love~ you} A_2\\
A_1 \promise{\rm Don't~ love~ A_2} A_3
\end{array}
\eeq


\section{Promise analysis}

Logic is a way of analysing the consistency of assumptions. It is
based on the truth or falsity of collections of propositions
$p_1,p_2,\ldots$. One must formulate these propositions in advance and
then use a set of assumptions to determine their status. The advantage
of logic is that it admits the concept of a proof.

Is there a logic that is suitable for analysing promises? Modal logic
has been considered as one possibility, and some authors have made
progress in using modal logics in restricted
models\cite{ortalo1,glasgow1}. However, there are basic problems with
modal logics that limit their usefulness\cite{lupu2}.

More pragmatically, logic alone does not usually get us far in
engineering. We do not usually want to say things like ``it is true
that $1+1=2$''. Rather we want a system, giving true answers, which
allows us to compute the value of $1+1$, because we do not know it in
advance. Ultimately we would like such a calculational framework for
combining the effects of multiple promises. Nevertheless, let us set
aside such practical considerations for now, and consider the
limitations of modal logical formalism in the presence of autonomy.


\subsection{Modal Logic and Kripke Semantics}

Why have formalisms for finding inconsistent policies proven to be so
difficult? A clue to what is going wrong lies in the many-worlds
interpretation of the modal logics\cite{modallogic}.
In the modal logics, one makes propositions $p,q$ etc., which are
either true or false, under certain interpretations. One then
introduces modal operators that ascribe certain properties to those
propositions, and one seeks a consistent language of such strings.

Modal operators are written in a variety of notations, most often with
$\boxx$ or $\diamond$. Thus one can say $\boxx p$, meaning ``it is
necessary that $p$ be true'', and variations on this theme:

\begin{center}
\begin{tabular}{ll}
\hline
$\boxx p$ & $\diamond p = \neg \boxx \neg p$\\
\hline\hline
It is necessary that $p$ & It is possible that $p$\\
It is obligatory that $p$ & It is allowed that $p$\\
It is always true that $p$ & It is sometimes true that $p$\\
\end{tabular}
\end{center}
A system in which one classifies propositions into ``obligatory'',
``allowed'' and ``forbidden'' could easily seem to be a way to codify
policy, and this notion has been
explored\cite{ortalo1,glasgow1,deontic,lupu2}.

Well-known difficulties in interpreting modal logics are dealt with
using Kripke semantics\cite{kripke1}. Kripke introduced a `validity
function' $v(p,w)\in\{T,F\}$, in which a proposition $p$ is classified
as being either true or false in a specific `world' $w$. Worlds are
usually collections of observers or agents in a system.

Consider the formulation of a logic of promises, starting with the
idea of a `promise' operator:
\begin{itemize}
\item $\boxx p =$ it is promised that $p$ be true.
\item $\diamond p = \neg\boxx\neg p =$ it is unspecified whether $p$ is true.
\item $\boxx \neg p =$ it is promised that $p$ will not be true.
\end{itemize}
and a validity function $v(\cdot,\cdot)$.
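These promise readings of the modal operators can be mocked up in a few
lines of code. The sketch below is our own illustration, not part of the
formalism: the names (\texttt{World}, \texttt{v}, \texttt{box},
\texttt{diamond}) are assumptions, and a `world' is reduced to two
explicit sets of promised propositions.

```python
# Minimal sketch of the Kripke-style validity function v(p, w) and the
# 'promise' reading of the modal operators. All names are illustrative.

class World:
    """A 'world' holding explicit promise assignments for propositions."""
    def __init__(self, promised, promised_not):
        self.promised = set(promised)          # propositions promised true
        self.promised_not = set(promised_not)  # propositions promised false

def v(p, w):
    """Validity function: is proposition p promised true in world w?"""
    return p in w.promised

def box(p, w):
    """[]p : it is promised that p be true."""
    return v(p, w)

def box_not(p, w):
    """[]~p : it is promised that p will not be true."""
    return p in w.promised_not

def diamond(p, w):
    """<>p = ~[]~p : it is unspecified whether p (not promised false)."""
    return not box_not(p, w)

w = World(promised={"serve_data_10ms"}, promised_not={"share_logs"})
assert box("serve_data_10ms", w)     # promised true
assert not diamond("share_logs", w)  # promised not to hold
assert diamond("reboot_daily", w)    # unspecified either way
```

Note that, as the text goes on to argue, this formulation still attaches
a promise to a single world, with no notion of a recipient.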
\subsection{Further problems with modal logic}

Some modal logics begin with the assumption of idempotence of the core
operators. For example, one assumes that $\boxx p \rightarrow
\boxx\boxx p$. Many justifications for this have been attempted;
indeed, the repeated operator $\boxx\boxx p$ can be phrased in several
ways in English:

\begin{itemize}
\item It is necessary that $p$ be necessary.
\item $p$ is necessarily a necessity.
\item It is required that $p$ be necessary.
\item $p$ must be necessary.
\item $p$ must be a requirement.
\end{itemize}
In each of these cases, the truth of this relation (the necessity of
necessity) seems to be simply false in a world of autonomous promises;
thus we reject this form of reasoning, autonomously and voluntarily
(see section \ref{invol}).

\subsection{Single promises}

A promise is usually shared between a sender and a recipient. It is
not a property of single agents, as in the usual modal logics, but of
a pair of agents. However, a promise directly constrains only the
behaviour of the promiser, not the promisee.

Consider the example of the Service Level Agreement, above, and let
$p$ mean ``Will provide data in less than 10ms''. How shall we express
the idea that a node $A_1$ promises a node $A_2$ this proposition?
Consider the following statement:
\beq
\boxx p, ~~~v(p,A_1) = T.
\eeq
This means that it is true that $p$ is promised at node $A_1$,
i.e. node 1 promises to provide data in less than 10ms -- but to whom?
Clearly, we must also provide a recipient. Suppose we try to include
the recipient in the same world as the sender, i.e.
\beq
\boxx p, ~~~v(p,\{A_1,A_2\}) = T.
\eeq
However, this means that both nodes $A_1$ and $A_2$ promise to deliver
data in less than 10ms.
This is not what we need; a recipient is still unspecified. Clearly
what we want is to define promises on a different set of worlds: the
set of possible links or {\em edges} between nodes. There are $N(N-1)$
such directed links. Thus, we may write:
\beq
\boxx p, ~~~ v(p,A_1\rightarrow A_2)=T.
\eeq
This is now a unique one-way assertion about a promise from one agent
to another. A promise becomes a tuple $\langle \tau,p,\ell\rangle$,
where $\tau$ is a theme or promise-type (e.g. Web service), $p$ is a
proposition (e.g. deliver data in less than 10ms) about how behaviour
is to be constrained, and $\ell$ is a link or edge over which the
promise is to be kept. All policies can be written this way, by
inventing fictitious services. Also, since every autonomous promise
will have this form, the modal/semantic content is trivial and a
simplified notation could be used.

\subsection{Regional or collective promises from Kripke semantics?}

Kripke structures suggest ways of defining regions over which promises
might be consistently defined, and hence a way of making uniform
policies. For example, a way of unifying two agents $A_1$, $A_2$ with
a common policy would be for them both to make the same promise to a
third party $A_3$:
\beq
\boxx p, ~~~ v(p,\{ A_1\rightarrow A_3,A_2\rightarrow A_3 \})=T.
\eeq

However, there is a fundamental flaw in this thinking. The existence
of such a function, unifying links that originate from more than a
single agent-node, is contrary to the fundamental assumption of
autonomy. There is no authority in this picture that has the ability
to assert this uniformity of policy. Thus, while it might occur by
fortuitous coincidence that $p$ is true over a collection of links, we
are not permitted to {\em specify} it or demand it.
Each source-node has to make up its own mind. The logic verifies, but
it is not a tool for understanding construction.

What is required is a rule-based construction that allows independent
agents to come together and form structures that span several nodes,
by {\em voluntary cooperation}. Such an agreement has to be made
between every pair of nodes involved in the cooperative structure. We
summarize this with the following:

\begin{figure}[ht]
\begin{center}
\includegraphics[width=5cm]{figs/coop2}
%\psfig{file=coop2.eps,width=9cm}
\caption{\small (Left) Cooperation and the use of third parties to
measure the equivalence of agent-nodes in a region. Agents form groups
and roles by agreeing to cooperate about policy. (Right) How the
overlapping file-in-directory rule problem appears in terms of
promises to an external agent. An explicit broken promise is asserted
by the file, in spite of agreements to form a cooperative
structure.\label{pmm}}
\end{center}
\end{figure}
\begin{assume}[Cooperative promise rule]
For two agents to guarantee the same promise, one requires a special
type of promise: the promise to cooperate with neighbouring
agent-nodes, about basic promise themes.
\end{assume}
A complete structure looks like this:
\begin{itemize}
\item $A_1$ promises $p$ to $A_3$.
\item $A_2$ promises $A_1$ to collaborate about $p$ (denote this as a promise $C(p)$).
\item $A_1$ promises $A_2$ to collaborate about $p$ (denote this as a promise $C(p)$).
\item $A_2$ promises $p$ to $A_3$.
\end{itemize}
By measuring $p$ from both $A_1$ and $A_2$, $A_3$ acts as a judge of
their compliance with the mutual agreements between them (see
fig.~\ref{pmm}). This provides the basis of a theory of measurement,
by third-party monitors, in collaborative networks.
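The four-promise structure can be checked mechanically. The sketch
below is our own illustration, under assumed conventions that are not
part of the formalism: promises are written as (promiser, promisee,
body) triples, cooperation promises carry the body \texttt{C(p)}, and
the helper name \texttt{cooperative\_region} is invented for this
example.

```python
# Illustrative sketch: promises as (promiser, promisee, body) triples,
# with cooperation promises written as body "C(p)".

def cooperative_region(promises, region, judge, p):
    """True iff every agent in `region` promises p to `judge` and every
    ordered pair in `region` exchanges C(p) cooperation promises."""
    ps = set(promises)
    for a in region:
        if (a, judge, p) not in ps:        # the basic promise to the judge
            return False
    for a in region:
        for b in region:
            if a != b and (a, b, f"C({p})") not in ps:  # mutual cooperation
                return False
    return True

promises = [
    ("A1", "A3", "p"),     # A1 promises p to A3
    ("A2", "A1", "C(p)"),  # A2 promises A1 to collaborate about p
    ("A1", "A2", "C(p)"),  # A1 promises A2 to collaborate about p
    ("A2", "A3", "p"),     # A2 promises p to A3
]
assert cooperative_region(promises, {"A1", "A2"}, "A3", "p")
# Dropping any one of the four promises breaks the structure:
assert not cooperative_region(promises[:3], {"A1", "A2"}, "A3", "p")
```

The check makes explicit that uniformity over the region is assembled
pairwise from voluntary promises, not imposed from outside.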
It also shows how to properly define structures in the file-directory
example (see fig.~\ref{pmm}).


\subsection{Example: dependencies and handshakes}

Even networks of autonomous agents have to collaborate and delegate
tasks, depending on one another to fulfill promised services. An
important matter is either to find a way of expressing dependency
relationships without violating the primary assumption of autonomy, or
to prove that it cannot be done\endnote{In nearly all cases agents are
working without irresistible forces guiding them, so the prospect of
not being able to express cooperative tasks as voluntary autonomous
cooperation has to be regarded as contrived to begin with. We claim it
is `intuitively obvious' that anything that can be achieved by force
can also be achieved willingly.}.

Consider three agents $A_1,A_2,A_3$: a database server, a web server
and a client. We imagine that the client obtains a web service from
the web server, which, in turn, gets its data from a database. Define
propositions and validities:
\begin{itemize}
\item $p_1 =$ ``will send database data in less than 5ms'', $v(p_1,A_1\rightarrow A_2)=T$.
\item $p_2 =$ ``will send web data in less than 10ms'', $v(p_2,A_2\rightarrow A_3)=T$.
\end{itemize}
These two promises might, at first, appear to define a collaboration
between the two servers to provide a promise of service to the client,
but they do not.

The promise to serve data from $A_1 \rightarrow A_2$ is in no way
connected to the promise to deliver data from $A_2\rightarrow A_3$:
\begin{itemize}
\item $A_2$ has no obligation to use the data promised by $A_1$.
\item $A_2$ promises its web service regardless of what $A_1$ promises.
\item Neither $A_1$ nor $A_3$ can force $A_2$ to act as a conduit between database and client.
\end{itemize}

We have already established that it would not help to extend the
validity function to try to group the three nodes into a Kripke
`world'. Rather, what is needed is a structure that completes the
backwards promises to {\em utilize} promised services -- promises that
complete a {\em handshake} between the autonomous agents. We require:
\begin{itemize}
\item A promise to uphold $p_1$ from $A_1\rightarrow A_2$.
\item An acceptance promise, to use the promised data, from $A_2\rightarrow A_1$.
\item A conditional promise from $A_2\rightarrow A_3$ to uphold $p_2$ iff $p_1$ is both present and accepted.
\end{itemize}

\subsection{Acceptance}

We have deduced that three components are required to make a dependent
promise. This requirement cannot be derived logically; rather, we must
specify it as part of the semantics of autonomy.

\begin{axiom}[Acceptance or utilization promise rule]
Autonomy requires an agent to explicitly accept a promise that has
been made, when it will be used to derive a dependent promise.
\end{axiom}
We thus identify a second special type of promise: the ``usage'' or
``acceptance'' promise that we discuss in the next chapter. This is a
promise to receive, so it falls into the class of promise bodies
denoted $-b$.

What use is this construction? First, it advances the manifesto of
atomicity, making all policy decisions explicit. The construction has
two implications:
\begin{enumerate}
\item The component atoms (promises) are all visible, so the
inconsistencies of a larger policy can be determined by the presence
or absence of a specific link in the labelled graph of promises,
according to the rules.
\item One can provide basic recipes (handshakes etc.)
for building consensus and ``agent societies'', without hiding
assumptions. This is important in pervasive computing, where agents
truly are politically autonomous and every promise must be explicit.
\end{enumerate}
The one issue that we have not discussed is the question of how
cooperative agreements are arrived at. This question has been
discussed in the context of cooperative game
theory\cite{sirisane,axelrod1}, and will be elaborated on in a future
paper\cite{siri2}. Once again, it has to do with the human aspect of
collaboration. The reader can exercise imagination in introducing
fictitious, intermediate agents to deal with issues such as shared
memory and resources.


\subsection{Temporal behaviour}

As an addendum to this discussion, consider {\em temporal logic}: this
is a branch of modal logic in which an agent evolves from one Kripke
world into another, according to a causal sequence, which normally
represents time. In temporal logic, each new time-step is a new Kripke
world, and the truth or falsity of propositions can span sequences of
worlds, forming graph-like structures. Although time is not important
in {\em declaring} policy, it is worth asking whether a logic based on
a graph of worlds could be used to discuss the collaborative aspects
of policy. Indeed, some authors have proposed using temporal logic and
derivative formalisms to discuss the behaviour of policy, and to model
the evolution of systems in interesting
ways\cite{bandara1,bandara2,lafuente1}.

The basic objection to thinking in these terms is, once again,
autonomy. In temporal logic, one must basically know the way in which
the propositions will evolve with time, i.e. across the entire ordered
graph.
That presupposes that such a structure can be written down by an
authority for every world; it supposes the existence of a global
evolution operator, or master plan, for the agents in a network. No
such structure exists, {\em a priori}. It remains an open question
whether causality is relevant to policy specification.


\subsection{Interlopers: transference of responsibility}\label{interxx}

One of the difficult problems of policy consistency is in transferring
responsibilities from one agent to another: when an agent acts as a
conduit or interloper for another. Consider agents $a$, $b$ and $c$,
and suppose that $b$ has a resource $B$ which it can promise to
others. How might $b$ express to $a$: ``You may have access to $B$,
but do not pass it on to $c$''?

\begin{figure}[ht]
\includegraphics[width=12cm]{figs/interloper}
%\psfig{file=interloper.eps,width=8cm}
\caption{\small Transference of responsibility.\label{interloper}}
\end{figure}

The difficulty with this promise is that the promise itself refers to
a third party, and this mixes link-worlds with constraints. As a
single promise, this desire is not implementable in the proposed
scheme:
\begin{itemize}
\item It refers to $B$, which $a$ has no access to, or prior knowledge of.
\item It refers to a potential promise from $a$ to $c$, which is unspecified.
\item It preempts a promise from $a$ to $b$ to never give $B$ along $a\rightarrow c$.
\end{itemize}
There is a straightforward resolution that maintains the autonomy of
the nodes and the principle of separation between nodes and
constraints, and which makes the roles of the three parties
explicit. We note that node $b$ cannot order node $a$ to do
anything. Rather, the agents must set up an agreement about their
wishes.
This also reveals the fact that the original promise is vague and
inconsistent in the first instance, since $b$ never promises that it
will not give $B$ to $c$ itself. The solution requires a cooperative
agreement (see fig.~\ref{interloper}).
\begin{itemize}
\item First we must give $a$ access to $B$ by setting up the handshake
promises: i) from $b\rightarrow a$, ``send B''; ii) from
$a\rightarrow b$, accept/use ``send B''.
\item Then $b$ must make a consistent promise not to send $B$ from
$b\rightarrow c$, by promising ``not B'' along this link.
\item Finally, $a$ promises $b$ to cooperate with $b$'s promises about
``not B'', by promising to cooperate with ``not B'' along
$a\rightarrow b$. This implies (the dotted line in the figure) that it
will obey an equivalent promise ``not B'' from $a\rightarrow c$, which
could also be made explicit.
\end{itemize}
At first glance, this might seem like a lot of work to express a
simple sentence. The benefit of the construction, however, is that it
preserves the basic principles of making every promise explicit and of
separating agent-nodes from their intentions. This will be crucial to
avoiding the contradictions and ambiguities of other schemes.


\section{Modal logic and reinterpretation using promises}\label{invol}

The usual formulation of modal logic in terms of necessity is
unnecessarily fuzzy, even to the point of being incorrect. The
language of promises helps to clarify what is going on here.

The problem lies in the semantics of the most basic assumptions about
necessity. The rule that $\boxx p \rightarrow \boxx\boxx p$ is pure
nonsense if $\boxx$ means necessity. It is not at all necessary that
things be necessary. One can choose `requirements', which is a
voluntary act about things defined to be necessary.
Conversely, one can mandate a voluntary choice in a multiple-choice
exam. Thus the common-language interpretation is simply
wrong. However, all is not lost.

Consider a reinterpretation of these quantities as follows:
\begin{center}
\begin{tabular}{ll}
\hline
$\boxx p$ & $V p = \neg \boxx p$\\
\hline\hline
$p$ is involuntary & $p$ is voluntary\\
\end{tabular}
\end{center}
Involuntary acts are made by an irresistible force, so we are led to
the need to speak of forces that are beyond an agent's control.
However, every one of the following possibilities might be true in
some circumstances:

\begin{itemize}
\item $\boxx \boxx p$: it was forced upon the agent to make $p$ involuntarily true (coercion).
\item $V \boxx p$: the agent chose to make $p$ involuntarily true (discipline).
\item $\boxx V p$: the agent was forced to make $p$ a voluntary choice (authority).
\item $V V p$: the agent chose to make $p$ a voluntary choice (decision).
\end{itemize}

To handle this case, one can imagine making a proposition $p$
partially voluntary, so that it consists of a voluntary (promised)
part $p_v$ and an involuntary part $p_\boxx$:
\beq
p &=& p_v \;\union\; p_\boxx - p_v p_\boxx\\
\boxx p &=& p_\boxx\\
V p &=& p_v
\eeq
Then we can say
\beq
p_v \; \intersect\; p_\boxx &=& \emptyset\\
V p \; \intersect\; \boxx p &=& \emptyset.
\eeq


\section{Promise graphs}

The formulation of $\mu$-promises in the previous chapter has the
obvious characteristics of a network, or in graph-theoretical terms a
so-called {\em directed graph} (a network of arrows). This is not a
particularly novel or unusual construction\endnote{Micropromises bear
a passing resemblance to the theory of capabilities in
ref.
\cite{snyder1}}; many phenomena form such networks. However, this very
commonality points to the ubiquity of the construction, and this
feature of promises will allow us to draw on many important insights
that have been made about networks in later chapters. For example,
directedness in graphs displays the intricacies of {\em causation}:
the ordering of multi-agent phenomena.

When a single agent makes a collection of promises to other agents,
some of these can be simplified or replaced by a single promise. There
can also be cases in which we attribute special meaning (semantics) to
particular combinations of promises; thus we begin by discussing the
basics of composition.


\section{The use promise is not primitive}

The use-promise we have referred to so far cannot be a primitive
promise type, since it includes implicit information about the
promise. We can express this by defining:
\beq
U(b) \equiv -\psi(b) \with \Upsilon(b),
\eeq
i.e.
\begin{quotation}
\begin{center}
Use $\equiv$ knowledge of content $\with$ intention to employ content
\end{center}
\end{quotation}
where $\Upsilon(b)$ is the primitive promise to act on an
unconditional promise $\pi(b)$ that it has (necessarily) received
directly.


\section{Transactions and duality}\label{not_duality}

Each promise graph, classified in terms of $+$ and $-$ promise types,
has at least two dual or complementary viewpoints.

\subsection{Complementarity: Intent vs Implementation}

The first concerns the duality between planning and implementation, or
the declarative and imperative forms of a plan. We can see this in
fig.~\ref{duality}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=12cm]{figs/duality}
%\psfig{file=serial.eps,width=4.5cm}
\caption{Alternative interpretations of a service interaction, in
terms of a service or transport.\label{duality}}
\end{center}
\end{figure}

In the service view (a), the service provider takes ultimate
responsibility by making a promise directly to the end reader, but it
is a promise conditional on the behaviour of the post office, whose
role is to deliver the book. The positive aspect of this view is that
it reflects the reality of the trading interaction. The post office is
merely an assistant (see section \ref{assistance}). This is a version
of causation in which the original intention is the driver for events.

In the transport view (b), we model the interaction in a way more
closely related to the physical implementation of the promise. The
service provider (bookshop) promises to pass the book to the post
office, which in turn promises the reader to deliver it, assuming that
it gets the book in the first place. This is a version of causation in
which the transactions leading to fulfillment of the promise are in
focus.

In our view, the first of these is a more accurate representation of
the scenario, as it provides a deeper explanation for the events that
happen to transpire, and it places the end points of the service
delivery in a direct relationship with one another.


\subsection{Causation and time-reversal symmetry}\label{loops}

Consider fig.~\ref{timereversal} in its two incarnations. The left-
and right-hand versions tell exactly the same story, with structurally
identical graphs, yet the $+$ and $-$ signs have been reversed.
The re-interpretation suggests (but does not prove) that what is
reversed is which agent initiates the relationship. It is often
natural to assume that a $+$ promise must come before a use-promise
that makes use of it. However, this shows that no such rule is
necessarily the case, since by renaming the promises we achieve the
opposite result\endnote{Indeed, we could interpret an initial promise
to use a service as signalling a request for a service (like placing
an advertisement).}.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=12cm]{figs/timereversal}
%\psfig{file=serial.eps,width=4.5cm}
\caption{Alternative interpretations of a service interaction,
differing in who initiates the transaction.\label{timereversal}}
\end{center}
\end{figure}

This insight is part of a general symmetry in transactions between
giver and receiver.

The symmetry between $\pm$ promises is a fundamental one, what
physicists would call {\em time-reversal symmetry}, or the lack of an
`arrow of time', in this case of `who goes first'. As with all
physical laws and mathematical expressions of change, there is
initially no implied direction to these causative arrows. It is up to
the analyst to break the symmetry by specifying a {\em boundary
condition} (or, in this case, an initial condition): a chosen point at
which we have certain knowledge of the system concerned, together with
the direction from this milestone in which the predictions (promises)
of behaviour take us.

We should not assume anything about which agent makes or keeps a
promise first, simply from the structure of the graph. Additional
information is needed to specify the direction of causation.
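The sign-reversal symmetry can be illustrated with a small sketch. This
is our own illustration, not notation from the text: promises are
written as (promiser, promisee, body) triples with bodies prefixed by
\texttt{+} or \texttt{-}, and the helper names are invented here.

```python
# Illustrative sketch: a promise graph as a set of signed edges.
# Flipping every +/- sign changes which party is read as the initiator,
# but the unsigned structure is identical, so the graph alone carries
# no arrow of 'who goes first'.

def flip_signs(promises):
    """Time-reversal of the labels: swap +b <-> -b on every promise."""
    return {(a, b, ("-" if body.startswith("+") else "+") + body[1:])
            for (a, b, body) in promises}

def unsigned(promises):
    """Forget the +/- signs, keeping only the structural content."""
    return {(a, b, body[1:]) for (a, b, body) in promises}

giver_first = {("A1", "A2", "+book"),   # A1 offers the book
               ("A2", "A1", "-book")}   # A2 promises to use/accept it
receiver_first = flip_signs(giver_first)

# Structurally identical graphs, yet every sign is reversed:
assert unsigned(giver_first) == unsigned(receiver_first)
assert giver_first != receiver_first
assert flip_signs(receiver_first) == giver_first  # an involution
```

The final assertion makes the symmetry explicit: applying the reversal
twice returns the original graph, just as the text's two incarnations
of the figure tell the same story.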