% Annotation of /trunk/chap_trust.tex (PromiseBook repository)
\chapter{The role of trust}

\begin{quote}
{\em I don't trust him. We're friends.}\\
~~~~~~ --Bertolt Brecht
\end{quote}

The decision to trust someone is a policy decision. Although the
decision can be made {\em ad hoc}, our common understanding of trust
is that it is based on a gathering of experience, i.e. a process of
learning about the behaviour and reputation of someone in a variety of
scenarios. Our particular policy might weight certain sources and
behaviours more heavily than others, and no one can tell us what is the
right thing to do. Hence trust is intimately connected with personal
autonomy.

In this chapter, we wish to define trust in the spirit of this personal
autonomy, by basing it directly on the concept of how reliably a
promise is kept. A promise is also an autonomously made declaration of
behaviour that is highly individual; moreover, it carries with it
the notion of a theme (what the promise is about)\cite{promiseidea}. By combining
promises with reliability, we thus have a natural definition of trust
that satisfies well-understood rules for revising both the logical
aspects of policy and the statistical observations made about agents'
behaviours. We show that this viewpoint satisfies the desirable properties
for use in computer security schemes.


\section{What is trust?}

The concept of trust is both well known and widely used in all kinds
of human interactions. However one chooses to interpret the
tantalizing quotation above, it should indicate that trust is a
subjective and highly non-trivial issue.

Trust is something that humans hold both for
one another and sometimes for inanimate objects (``I trust my computer
to give the right answer''). In computer systems, the concept of
trust is especially used in connection with security.
In risk analysis,
one considers a secure system to be one in which every possible risk
has either been eliminated or accepted as a matter of policy. Trust is
therefore linked to the concept of policy in a fundamental way.

Trust is also discussed in the case of network security protocols, for instance,
in the case where keys are exchanged. The classic dilemma of key distribution
is that there is often a high level of uncertainty in knowing the
true originator of a secure identifier (cryptographic key). One therefore
hopes for the best and, beyond a certain threshold of evidence, ``trusts'' the
assumption of ownership. Several protocols claim to manage such trust
issues, but what does this really mean?

In spite of the reverence in which the concept is held, there is no
widely accepted technical definition of trust. This has long been a
hindrance to the discussion and understanding of the concept.
Wikipedia defines: ``Trust is the belief in the good character of one
party; they are believed to seek to fulfil policies, ethical codes,
law and their previous promises.'' In this chapter, we
address the deficiencies of discussions of trust by
introducing a meta-model for understanding trust. Our model can be
used to explain and describe common trust models like ``trusted third
parties'' and the ``web of trust''.

\subsection{Promises -- autonomous claims}

Trust is an evaluation that can only be made by an individual. No one
can force someone to trust someone else in a given situation. This
basic fact tells us something important about how trust should be defined.

Recently, one of us has introduced a description of autonomous behaviour in
which individual agents are entirely responsible for their own
decisions\cite{burgessdsom2005,siri1,siri2,siri3}. Promise theory is
a graphical model of policy.
The basic responsibility of an
agent to be true to its own assertions is an important step towards a
way of describing trust.

Promise theory is useful in this regard because all agents are
automatically responsible for their own behaviour and only their own
behaviour. Responsibility is not automatically transitive
between autonomous agents: it has to be arranged through explicit
agreement between agents in a controlled way; hence one avoids
problems such as hidden responsibility that make the question of
whether to trust an individual agent complex.

In this chapter, we argue that the concept of trust can be defined
straightforwardly as a {\em valuation} of a promise -- specifically the {\em
expectation} of autonomous behaviour. When we say that we trust
something, we are directing this towards the instigator of some
promise, whether implicit or explicit. Moreover, {\em reputation} is
simply what happens to trust as it is communicated about a network,
i.e. it is a `rumour' that spreads epidemically throughout a network along
different paths, and hence develops into a path-dependent estimate of
trustworthiness.

The matter of evidence-gathering, in order to justify the expectation
value of keeping a promise, is subtle, and so we shall discuss it in
some detail. We argue that there is insufficient information in the
notions of trust or reputation to make a reliable estimate of
trustworthiness. Thus trust is an inherently ambiguous concept; each
valuation of trustworthiness is, in essence, an {\em ad hoc} policy.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{figs/trust}
%\psfig{file=trust.eps,width=8cm}
\caption{The chain of trust from verifiable promises to local trust
by an agent, to global or community trust, which we interpret as reputation.\label{trust}}
\end{center}
\end{figure}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{The literature on trust}

There is an extensive literature on trust in computer
science\cite{lapadula1,mcilroy1,winkler2,patton04technologies,sang-can,huynh2004a}. Much
of it is concerned with generating protocols for the purpose of
determining the validity of public keys and other identity tokens, or
criticizing these mechanistic views in a wider security perspective.
Here we are mainly concerned with general ideas about trust and
reputation.

We find the recent work of Kl\"uwer and Waaler to be of interest from
the viewpoint of logic\cite{klwer05trustworthiness,relativetrust}.
These authors present a natural reasoning system about trust which includes
the notion of {\em ordering} by levels of trustworthiness.

The work that seems closest to ours may be found in refs.
\cite{beth1} and \cite{jossang1}.
Here the authors distinguish between trust and reputation and provide
an epidemic-like procedure for valuating trust based on some
inference rules and numerical measures that are essentially
reliabilities. The calculation is hence mainly appropriate for a
frequentist interpretation of probability. The authors in ref.
\cite{beth1} are unable to
distinguish trust about different issues, or relate these in their
model. In ref. \cite{jossang1}, an attempt is made at motivating
trust types, but the underlying properties of these types are not
completely clear.

In our proposal:
\begin{enumerate}
\item We allow for multiple sources (types) for which trust and reputation are valuated.

\item Our combinatorics are based on logic and on Bayesian probability estimates,
which are more appropriate
estimators for the small amounts of experience involved.
\end{enumerate}

Other work which we find valuable includes social viewpoints of trust
(see ref. \cite{trust1} for a review). This work brings in the matter
of human value judgements, which we feel is an important issue in any
definition of trust, since it is humans who make the final decisions
in practice. From a sociological viewpoint, there are many forms of
currency on which to build trust. Some of these are based on the
outcomes of stand-offs such as economic games, bargaining situations
and so on\cite{axelrod2}. Promises have already been shown to incorporate
these considerations neatly within their framework\cite{siri2}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Common usage of trust and reputation}

As with most words, the English word `trust' has a number of related
meanings which are worth documenting for reference and comparison.

\begin{itemize}
\item Trust implies a confidence or faith character:
e.g. ``one trusts in friends and family''.

\item It might be based on an assessment of reliability: e.g. ``A trustworthy employee''.

\item A related, but not identical, meaning has to do with presumed safety.
It also means to permit something without fear: ``I trust the user to
access the system without stealing.'' Such trust can be betrayed.

This is different because the feeling of safety is not a rationally determined
quantity, whereas reliability is observable and measurable.
Thus
there is both a rational and an irrational aspect to trust.

\item A final meaning of trust is the expression of hope, i.e. an
expectation or wish: ``I trust you will behave better from now on.''

Trust is therefore about the suspension of disbelief. It involves a
feeling of benevolence, or competence, on the part of the trustee.

Trust of this kind expresses an acceptance of risk, e.g. a jewelry
store trusts that passers-by will not smash a plate glass window very
often to steal displayed goods, but rather trusts that the windows
will improve sales. There could therefore be an economic decision
involved in risk-taking.
\end{itemize}

Reputation is a notion related to trust. We understand this to mean a
received judgement, i.e. an evaluation of an agent's reliability based
on hearsay. Reputation spreads like an epidemic process, but it is
potentially modified on each transmission. Thus, from a given source,
several reputations might emerge by following different pathways
(histories) through a network.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{A typed definition of trust}


An agent that is known to keep its promises is considered trustworthy
by any normal definition of trust, i.e. the agent would be reliable and
predictable, such that one could put aside one's doubts about whether
it might fail to live up to its assertions.

It seems natural, then, to associate trust with one agent's expectation
of the performance of another agent in implementing its promises.
This could seem like an unnecessarily narrow definition, but it turns
out to be more general than one might expect. What about trust in
matters that have not yet occurred? Clearly, trust could be
formulated about a future {\em potential promise}, i.e.
a promise
does not have to have been made for us to evaluate its likely reliability. The
usefulness of promises is that they encapsulate the relevant
information to categorise intentions and actions.

\begin{proposal}[Trust]
Trust can be defined as an {\em agent's expectation} that a promise will
be kept. It may be assigned a value lying between 0 and 1.
\end{proposal}

We shall define an ``agent's expectation'' in detail below, and we
shall additionally give meaning to the concepts of when an agent is
deemed to be {\em trustworthy} or {\em trusting}, which are global
concepts, different from merely {\em trusted}. This proposal has a
number of positive qualities. To begin with, it separates the {\em
experiential} aspect of trust from the {\em nature of the actions} on
which it is based. Thus, in terms of the philosophy of science, it makes a
clean distinction between empirical knowledge (expectation) and
theoretical knowledge (a promise).

Our definition is specific. The concept of trust, as normally applied in
computer science, is rather universal and non-specific: either one
trusts another agent or one does not; however, it is seldom that we
trust or distrust anyone or anything so completely. Our definition is a
{\em typed} definition, i.e. we gauge trust separately for each
individual kind of promise -- and this is where promises provide a convenient
notation and conceptual stepping stone. We assume that promises are a
more fundamental notion than trust.

According to our definition, trust is a reliability rating made by some
agent that is able to observe agents involved in a promise.
We
hesitate to call this a reliability {\em measure}: for reasons that we
shall make clear, there is normally insufficient evidence on which to
base a proper reliability estimate, in the sense of reliability
theory\cite{hoyland1}.


A reputation is little more than a rumour that spreads epidemically
throughout a network. Common ideas about reputation include:
\begin{itemize}
\item ``A general opinion of someone.''
\item ``A measure of someone's standing in the community.''
\end{itemize}
Reputation is not necessarily related to trustworthiness. One could
have a reputation based on how much money an agent spends, or how much
fuel it uses. What characterizes a reputation, as opposed to a
personal observation or evaluation, is that it is passed on. One does
not observe the characteristic first hand.

\begin{proposal}[Reputation]
Reputation can be defined as a valuation of some agent's past or
expected behaviour that is communicated to another agent.
\end{proposal}

We clarify and develop these basic proposals in the remainder of the chapter.
In particular, trust will be revisited in more
detail in section 8.


\subsection{A general expression for trust}

Trust is somehow complementary to the idea of a service promise. This
is suggested by the intuition that a promise to {\em use} a service
implies a measure of trust on the part of the receiver. We consider
trust a directed relationship from a {\em truster} to a {\em
trustee}. Moreover, it is a judgement or {\em valuation} of a promise
performed entirely by the {\em truster}.

We need a notation to represent this, similar to that for promises.
In
the spirit of the promise notation, we write the general case as:
\beq
S[T] \trust{b} R[U]
\eeq
meaning that $S$ trusts $R$ to ensure that $T$ keeps a promise
of $b$ to $U$.

In most cases, this is too much generality. In a world of autonomous
agents, no agent would expect agent $R$ to be able to ensure anything
about another agent $T$'s behaviour. The more common case is therefore with
only three parties:
\beq
A_1[A_2] \trust{b} A_2[A_3]
\eeq
i.e. agent $A_1$ trusts agent $A_2$ to keep its promise towards some
third-party agent $A_3$. Indeed, in most cases $A_3$ might also be
identified with $A_1$:
\beq
A_1[A_2] \trust{b} A_2[A_1]
\eeq
which, in turn, can be simplified to
\beq
A_1 \trust{b} A_2.
\eeq
In this case, trust is seen to be a dual concept to that of a promise.
If we use the notation of ref. \cite{siri2}, then we can write trust
as one possible valuation $v: \pi \rightarrow [0,1]$ by $A_1$ of the
promise made by $A_2$ to it:
\beq
A_1[A_2] \trust{b} A_2[A_1]~ \leftrightarrow ~v_1(A_2 \promise{b} A_1)
\eeq
This is then a valuation on a par with economic valuations of how much
a promise is worth to an agent\cite{siri2}. The recipient of a promise
can only make such a valuation if it knows that the promise has been
made.
\begin{proposal}
Trust of an agent $S$ by another agent $R$ can exist if agent $R$ is
informed that agent $S$ has made a promise to it in the past, or if
the recipient of the promise $R$
is able to infer by indirect means that $S$ has made such a
promise.
\end{proposal}
Thus any agent can formulate a trust policy towards any other agent.
The only remaining question is: on what basis should such a judgement
be made?
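To make the idea of a typed valuation concrete, the following sketch records a truster's own observations of typed promises and returns a trust value in $[0,1]$. It is purely illustrative and not part of the formal model: the names {\tt TrustLedger}, {\tt observe} and {\tt trust} are our own, and the Laplace-smoothed estimator is just one {\em ad hoc} policy choice among many, in keeping with the Bayesian estimates mentioned earlier.

```python
from collections import defaultdict

class TrustLedger:
    """Illustrative sketch: trust as a typed valuation in [0,1] of a
    promise, estimated from the truster's own observed outcomes."""

    def __init__(self):
        # (truster, trustee, promise_body) -> [times_kept, times_broken]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, truster, trustee, body, kept):
        """Record one outcome of a promise of type `body`."""
        self.counts[(truster, trustee, body)][0 if kept else 1] += 1

    def trust(self, truster, trustee, body):
        """Laplace-smoothed expectation that the promise will be kept.

        With no evidence the value is 0.5: trust is undetermined, not
        zero.  Trust is typed: the same trustee may score differently
        for each kind of promise.
        """
        kept, broken = self.counts[(truster, trustee, body)]
        return (kept + 1) / (kept + broken + 2)

ledger = TrustLedger()
for outcome in (True, True, True, False):
    ledger.observe("A1", "A2", "deliver", outcome)

print(ledger.trust("A1", "A2", "deliver"))  # 3 kept, 1 broken -> 4/6
print(ledger.trust("A1", "A2", "pay"))      # no evidence -> 0.5
```

Note that the choice of prior (here, one hypothetical kept and one broken promise) is itself a free parameter, echoing the point that such valuations cannot be made without subjective choices.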

Our contention is that the most natural valuation to attach to trust is
an agent's estimate of the expectation value that the promise will be
kept, i.e. an estimate of the reliability of the agent's
promise:
\beq
A_1[A_2] \trust{b} A_2[A_1]~ \policy ~E_1(A_2 \promise{b} A_1)
\eeq
where $\policy$ means `is defined by policy as', and the expectation
value $E_R(\cdot)$ for agent $R$ has yet to be defined (see Appendix
A for these details). We note the essential difficulty: that such
valuations of reliability are not unique. They are, in fact, entirely
subjective and cannot be evaluated without ad hoc choices of a number
of free parameters. We return to this point below.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Cases: The underlying promises for trust idioms}

To ensure that our definition of trust is both intuitive and general,
we present a number of `use-cases' below and use these to reveal, in
each case, the expectation of a promise that underlies the trust.
In each case, we write the declarations of trust in notation,
in words, and as an expectation value of an underlying promise.
In some cases, the expressions of trust are ambiguous and support several
interpretations, which can only be resolved by going to a deeper explanation
in terms of promises.
\begin{itemize}

\item {\em I trust my computer to give me the right answer.}
This could literally mean that one trusts the computer, as a potentially unreliable piece
of hardware:
\beq
{\rm Me} \trust{\rm right~answer} {\rm Computer} \policy E_{\rm Me}({\rm Computer} \promise{\rm answer} {\rm Me})
\eeq
i.e. I expect that the computer will keep its (implicit) promise to furnish me with the correct answer.

However, there is another interpretation.
We might actually (even subconsciously) mean that we trust the company that produces
the software (the vendor) to make the computer deliver the right answer when asked, i.e.
I expect that the promise by the vendor to me, to make the computer give me the right answer, will
be kept:
\beq
{\rm Me}[{\rm Computer}]
\trust{\rm answer} {\rm Vendor}[{\rm Me}]~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\
\policy E_{\rm Me}\left( {\rm Vendor}[{\rm Computer}] \promise{\rm answer} {\rm Me}[{\rm Me}]\right)
\eeq
In either case, the relationship between the promise, the expectation and the trust is the same.

\item {\em I trust the identity of a person (e.g. by presence, public key or signature).}

This is one of the classic problems of security systems, and we find that the simple
statement hides a muddle of possibilities. It has many possible interpretations; however, in
each case we obtain clarity by expressing these in terms of promises.

\beq
{\rm Me} \trust{\rm Authentic} {\rm Signature} \policy E_{\rm Me}({\rm Signature}
\promise{\rm Authentic} {\rm Me})
\eeq
In this version, we place trust in the implicit promise that a credential makes of being
an authentic mark of identity. This is a simple statement, but we can be sceptical of the
ability of a signature to make any kind of promise.

\beq
{\rm Me}[{\rm Signature}] \trust{\rm Authentic} {\rm Certifier}[{\rm Me}]~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\
\policy E_{\rm Me}({\rm Certifier}[{\rm Signature}]
\promise{\rm Authentic} {\rm Me})
\eeq
i.e. I trust a Certifying Agency to ensure that the implicit promise
made by the credential to represent someone is kept.
Or I expect the
certifying agency (possibly the originator of the signature himself)
to keep a promise to me to ensure that the signature's promise to me
is kept (e.g. the technology is tamper-proof).

Yet a third interpretation is that the trust of the key is based
on the promise to verify its authenticity, on demand. This is the
common understanding of the ``trusted third party'':
\beq
{\rm Me} \trust{\rm verify~ key} {\rm Certifier}
\policy E_{\rm Me}\left(
{\rm Certifier} \promise{\rm verify~key} {\rm Me}
\right)
\eeq
i.e. I trust that the key has been authorized and is verifiable by the named
Certification Agency. This last case avoids the problem of why one should
trust the Certifying Agency, since it refers only to the verification service
itself.

\item A similar problem is encountered with currency denominations, e.g.
pound notes, dollars, or Euros. These tokens are clearly not valuable
in and of themselves; rather, they represent value. Indeed, on British
Pound notes, the words ``I promise to pay the bearer on demand the sum of ... X
pounds'' are still found, with the printed signature of the Chief
Cashier. Indeed, the treasury would at one time, if pressed, redeem the value of
these paper notes in gold. Thus trust in a ten pound note may be
expressed in a number of ways.

We trust the note to be legal tender, i.e.
\beq
{\rm Me} \trust{\rm legal} {\rm Note} \policy E_{\rm Me}
\left(
{\rm Cashier} \promise{\rm gold \OR note} {\rm Me}
\right)
\eeq
we expect that the Chief Cashier will remunerate us in gold on presenting
the note.
Alternatively, we assume that others will promise to accept the
note as money in the United Kingdom (UK):
\beq
{\rm Me} \trust{\rm legal} {\rm Note} \policy E_{\rm Me}
\left(
{\rm S} \promise{U({\rm note})} {\rm Me}
\right),~~ \forall S \in {\rm UK}
\eeq
Interestingly, neither dollars nor Euros make any such promise. Rather, the
dollar bill merely claims ``In God we trust''\endnote{It is a matter of belief
whether one assigns this trust to a promise made by an agent called God.}.

\item {\em Trust in family and friends.}

This case is interesting, since it is so unspecific that it could be
assigned almost any meaning. Indeed, each agent is free to define its
meaning autonomously. For some bundle of one or more promises ${\cal P}^*$ (see notation $\Rightarrow$ in section \ref{bundles}),
\beq
{\rm Me} \trust{{\cal P}^*} \{{\rm Family}\} \policy E_{\rm Me}\left( \{{\rm Family}\} \bundle{{\cal P}^*} A_i\right)
\eeq
i.e. for some arbitrary set of promises, we form an expectation about the likelihood
that family and friends would keep their respective promises to the respective
promisees. These promises might, in fact, be hypothetical and the evaluations mere
beliefs.
On the other hand, we might possess actual knowledge of these transactions,
and base judgement on the word of one of these family/friend members to keep their
promises to the third parties:
\beq
{\rm Me} \trust{{\cal P}^*} \{{\rm Family}\} \policy E_{\rm Me}\left( \{{\rm Family}\} \bundle{{\cal P}^*} {\rm Me}[A_i]\right)
\eeq


\item {\em A trustworthy employee.}

In this case, trustworthiness is based more on a history of delivering on promises
made in the context of work, e.g.:
\beq
{\rm Boss} \trust{\rm Deliver} {\rm Employee} \policy E_{\rm Boss}({\rm Employee} \promise{\rm Deliver} {\rm Boss})
\eeq

\item {\em I trust the user to access the system without stealing.}

Here the promise is not to steal. The promise does not have to have
been made explicitly. Indeed, in civil society this is codified into
law, and hence all agents implicitly promise this by participating in
that society.

\item {\em ``I trust you will behave better from now on!''}

This can be understood in two ways. In the first interpretation, this
is not so much an evaluation of trust as it is a challenge (or even
warning) to the agent to do better. Alternatively, it can be taken literally
as an expression of belief that the agent really will do better. In the latter
case, it is:
\beq
{\rm Me} \trust{\rm Do~ better} {\rm You} \policy E_{\rm Me}\left(
{\rm You} \promise{\rm Do~better} {\rm Me}
\right)
\eeq


\end{itemize}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Expectations of ensembles and compositions of promises}

We are not done with policy's intrusion into the definition of
expectation.
Since promises can be composed according to
straightforward rules, we must be able to compute two distinct things:
\begin{enumerate}
\item The expectation of a composition of promises that coexist.
\item The composition of expectations from different ensembles.
\end{enumerate}
The difference between these is analogous to the difference between
the combination of experimental data into ensembles for computing
probabilities, and the composition of different probable inputs in
fault trees (with $\CAND$, $\COR$, $\CXOR$, etc.).

We have already discussed the composition of data sets into ensembles,
the effect this has on probabilities, and how this is expressed in terms
of the basic expectation values in section \ref{ensemble}.

We shall need to define the meaning of the following in order to
determine the trust deriving from compound promises:


\begin{enumerate}
\item The expectation of incompatible promises.
\item The expectation of a composition of parallel promises between a pair of agents.
\item The expectation of a composition of serial promises between a chain of agents.
\end{enumerate}



\subsection{Parallel promise (bundle) expectation}

When promises are made in parallel, the question arises as to how much
to trust them as a bundle. Should one ever base one's trust on a
complete package or bundle of promises? This is a subjective
judgement, based on whether certain promises are related in the view of
the promisee. If one makes an expectation valuation for each promise
individually, does it make sense to combine them as probabilities,
e.g. in the manner of a fault tree\cite{burgessbook2,hoyland1}? One
is used to the probability composition rules for the binary logic of
independent events.


\begin{itemize}
\item ($\CAND$, $\AND$): If the promisee is
dependent on several mutually reinforcing promises, then $\CAND$
semantics are a reasonable assumption. In a security situation, this
might be reasonable. The multiplicative combination rule means that each
additional promise that must be in place reduces the total trust that the
promiser will keep all of its promises proportionally.

\item ($\COR$, $\OR$): Here one says that if one or more promises are kept, then
trustworthiness is reinforced. This is an optimistic policy which
seems to suggest that the promisee is understanding about the promiser's
potential difficulties in keeping a promise.

\item ($\CXOR$): An alternative scenario is to have a number of promises that are
alternatives to one another. For instance, mutually exclusive
conditional promises that behave like a switch: e.g.
\beq
S \promise{x ~\CXOR~ x'} R \equiv
\left\{\begin{array}{c}
S \promise{x|y} R\\
S \promise{x'|\neg y} R
\end{array}
\right.
\eeq
i.e. $S$ promises $x$ to $R$, iff $y$, else it promises $x'$.

\item ({\sc RANKED}): If the promises are ranked in their importance to the recipient,
then the measure of trust associated with the package is best judged by weighting the
importance appropriately. Referring to the discussion in section \ref{ensemble}, this
admits a general convex combination of contributions for ranking an $\COR$ (see below).
\end{itemize}
Let us consider how these are represented as functions.

\begin{definition}[Expectation of a promise bundle]
Let $S$ (sender) and $R$ (recipient) be agents that make a number of promises in parallel.
The expectation of the composition of a bundle of parallel promises $S \promise{b^*} R$
is a function $F_R$ of the expectations of the individual promises:
\beq
E_{R}\left(S \promise{b^*} R\right) \policy F_{R} \left( E_{R}\left( S \promise{b_1} R\right),E_{R}\left( S \promise{b_2} R\right),\ldots\right)
\eeq
\end{definition}


The function $F_R$ is a mapping from $N$ promise expectations to a new expectation value:
\beq
F_R : [0,1]^N \rightarrow [0,1]
\eeq
Several such functions are known from reliability theory, e.g. in fault tree
analysis (see for instance ref. \cite{hoyland1}). Examples include:
\beq
F^{\rm AND}_{R}
\left(S \promise{b^*} R\right)
&=&
\prod_i
E_{R}\left(S \promise{b_i} R\right)\\\nonumber\\
F^{\rm OR}_{R}
\left(S \promise{b^*} R\right)
&=&
1-\prod_i
\left( 1 - E_{R}\left(S \promise{b_i} R\right)\right)\nonumber\\
&\simeq& \sum_i E_{R}\left(S \promise{b_i} R\right) ~\pm~ O(E^2)\\
F^{\rm XOR}_{R}
\left(S \promise{b^*} R\right)
&\simeq&
1-\prod_i
\left( 1 - E_{R}\left(S \promise{b_i} R\right)\right)\nonumber\\
&\simeq& \sum_i E_{R}\left(S \promise{b_i} R\right) ~\pm~ O(E^2).
\eeq
where $O(E^2)$ denotes terms of the order of the probability squared, which are small.
A further possibility is to take a weighted mean of the promise
estimates. This better supports the view in section \ref{ensemble}
about different sized ensembles and their relative weights. There
might be additional (irrational) reasons for giving priority to
certain promises, e.g. leniency with respect to a difficult promise.
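The composition functions above can be sketched numerically. The following is an illustrative sketch only: the function names are our own, and the AND/OR rules assume the independence of the individual promises, as in elementary fault tree analysis.

```python
from math import prod

def f_and(expectations):
    """CAND: the bundle is kept only if every promise is kept
    (independent promises assumed), so trust shrinks with each promise."""
    return prod(expectations)

def f_or(expectations):
    """COR: the bundle counts as kept if at least one promise is kept,
    an optimistic, lenient policy."""
    return 1 - prod(1 - e for e in expectations)

def f_ranked(expectations, weights):
    """RANKED: convex combination; the weights must sum to 1."""
    assert abs(sum(weights) - 1) < 1e-9
    return sum(a * e for a, e in zip(weights, expectations))

E = [0.9, 0.8]
print(f_and(E))                    # ~0.72: stricter than either alone
print(f_or(E))                     # ~0.98: more lenient than either alone
print(f_ranked(E, [0.75, 0.25]))   # ~0.875: importance-weighted mean
```

The numbers make the policy trade-off visible: for the same two promise expectations, the $\CAND$ valuation lies below both individual values while the $\COR$ valuation lies above both, so the choice of composition function is itself a trust policy decision.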

To combine the different possibilities (analogously to fault trees), one could
first reduce products of $\CAND$ promises into sub-bundles, then recombine these
using a weighted estimate:
\beq
F^{\sc RANKED}_{R} &\policy& \sum_i \alpha_i E_{R}\left(S \promise{b_i} R\right)\nonumber\\
\sum_i \alpha_i &=& 1
\eeq

Note that, due to the reasoning of probability theory, the expectation
of something AND something else is less than the probability of
either. This might be seen as pessimistic as far as trust is
concerned. We have to make a policy decision about whether or not to
place any weight on the combined expectation of a bundle of promises,
or whether to decide to allow only individual expectations.

For example, suppose an agent makes two contradictory promises
about service levels, e.g. a promise to respond in 4ms and a promise
to respond in 5ms:
\beq
S &\promise{4}& R\nonumber\\
S &\promise{5}& R
\eeq
Formally, this is a conflict, since both promises cannot be true
at the same time. The trust in each individual promise can be
estimated independently for the two promises. The agent reliability
expectations of delivering ``4'' or ``5'' units of service are:
\beq
R \trust{4} S = E_R(4) &=& p(4) = 0.1\\
R \trust{5} S = E_R(5) &=& p(5) = 0.2
\eeq
Then we can consider what the expectation of the combination of promises
is. If the agent $S$ makes both promises simultaneously, the
expectation of the combined promises will be:
\beq
E_R(4 ~\CXOR~ 5) &\simeq& \frac{e_4\, E_R(4) + e_5\, E_R(5)}{e_4+e_5}
\eeq
where $e_4$ is our estimate of the likelihood that the agent can deliver ``4''
and $e_5$ is the estimate of the likelihood of delivering ``5''.
These beliefs can be based on many potential sources of information, chosen as a matter
of policy; one possibility is simply to identify $e_4 \policy E_R(4)$
and $e_5 \policy E_R(5)$. Thus a simple policy solution could be
to take
\beq
E_R(4 ~\CXOR~ 5)~ \policy~ \frac{E_R(4)^2+E_R(5)^2}{E_R(4)+E_R(5)} = 0.17
\eeq
i.e. in general a normalized sum of squares.

\subsection{Incompatible promise expectation}

For incompatible promises we must have at least complementary behaviour ({\sc NOT}):
\beq
E_A(S \promise{\neg b} R) &=& 1 - E_A(S \promise{b} R)\nonumber\\
F_R(E_R(S \promise{\neg b} R)) &=& 1 - F_R(E_R(S \promise{b} R))
\eeq
Ideally, incompatible promises would not be made without conditionals
to select only one of the alternatives.

In the case of $\CAND$ it is necessary to resolve the
ambiguity in the meaning of the combination of incompatible
promises. It is by definition a logical impossibility for incompatible
promises to be kept. Thus, while we cannot prevent an agent from promising
such nonsense, our expectation of the combination ought
to be zero.
\begin{definition}[Expectation of incompatible promises with $\CAND$]

The expectation of incompatible promises,
\beq
F_R\left(A_1 \promise{ b_1} A_2 ~\CAND ~A_1 \promise{ b_2} A_2\right) \equiv 0 ~~{\rm when}~ b_1 \# b_2
\eeq
is defined to be zero for any rational agent.
\end{definition}
Hence, in the example above,
\beq
E_R(4 ~\CAND ~5) &=& 0.
\eeq

\subsection{Serial promise expectation and transitivity of trust}

Several systems base their operation on the idea that trust is to some
extent transitive. The ``Web of Trust'' notion in public key
management proposes that trust can be conferred transitively.
This
is not a property of promises, so it is of interest to consider how
this works. In other words, if $A_1$ trusts $A_2$ to do $b$, and $A_2$
trusts $A_3$ to do $b$, then $A_1$ will often trust $A_3$ to do $b$. Here
$b$ is generally taken to be ``reveal one's true identity''. This
notion does not fit well with a promise theory interpretation of trust
because it is type-unspecific.

This is easy to see by noting that
\beq
A_1 \promise{b} A_2 , A_2 \promise{b} A_3 \not\imply A_1 \promise{b} A_3
\eeq
i.e. if $A_1$ makes a promise of $b$ to $A_2$ and $A_2$ makes the same
promise to $A_3$, it does not follow that $A_1$ has made any promise to
$A_3$.

An unspecific trust model might conform to the following property:
\beq
(i)~~ (A_1 \ctrust A_2) , (A_2 \ctrust A_3) \imply A_1 \ctrust A_3
\eeq
In terms of promises, we would interpret this to mean that, if $A_1$
trusts $A_2$ (to keep promises to $A_1$) and $A_2$ trusts $A_3$ (to keep
promises to $A_2$), then $A_1$ should trust $A_3$ to keep promises to
$A_1$. This is far from being a rational policy, since no
evidence is passed on about the reliability of the agents.
A less problematic alternative is:
\beq
(ii)~~ (A_1 \trust{\rm inform} A_2) , (A_2 \trust{b} A_3) \imply A_1[A_3] \trust{b} A_3[A_2]
\eeq
If $A_1$ trusts $A_2$ (to inform it about its relations with $A_3$)
and $A_2$ trusts $A_3$ (to keep its promise of $b$
to $A_2$), then $A_1$ trusts that $A_3$ is trustworthy in its promise of $b$ to $A_2$.

The matter of serial promises is one of diverging complication.
We make some brief notes about the problems associated with serial
promises, and leave the potentially extensive details for elsewhere.
The problems with trusting a distributed collection of promises
are
\begin{enumerate}
\item Promises are not common knowledge, so we do not have all the information.
\item Promises are not transitive.
\end{enumerate}

Knowledge about the promises and the local evaluations by the agents
can only be guaranteed by making chains of promises between the agents
to share this knowledge.
\beq
A_1 & \promise{\rm tell\,rep}~ A_2 ~\promise{\rm tell\,rep}& A_3\nonumber\\
A_1 & \stackrel{\pi:U({\rm tell\,rep})}{\longleftarrow}~ A_2 ~\stackrel{\pi:U({\rm tell\,rep})}{\longleftarrow}& A_3
\eeq
In order to pass on the necessary information about trust to a third
party, it must be relayed. Expectation of a chain of promises depends
on a chain of such trust and Use(trust) promises. However, each agent
in the chain agrees only to trust the previous agent. There is no
automatic agreement to trust the earlier members. If one were to make
an explicit promise to trust each agent's information about trust,
this would require a promise graph like the one in fig. \ref{chain}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{figs/chain}
%\psfig{file=chain.eps,width=6cm}
\caption{A chain of trust promises to transfer some valuation of trust in
one direction (only), from node
$a$ to each agent up to node $d$. This method is unreliable because
nodes $b$ and $c$ are under no obligation to pass on the correct
value. Note that these are promise arrows, not trust arrows.\label{chain}}
\end{center}
\end{figure}
This is clearly a fragile and somewhat complicated structure. An
alternative approach is to avoid chains of greater length than one,
and also to eliminate the extraneous and essentially impotent promises
from the chain, as in fig. \ref{mr}. However, this leads us merely
back to the notion of centralization, either in the form of a
trusted party for all agents, or as a complete peer-to-peer graph.
In order to remove the ambiguity of the trust promises, we must use a
different {\em promise type} for trust about each agent in the graph,
i.e. the trust passed on from agent $a$ must retain this label in
being transferred. However, here one has a paradox: if an agent is potentially
unreliable, then it can easily lie about this information.
Such serial chains are, in general, fraught with uncertainty; thus
agents might well choose, as a matter of policy, to disregard
reputations.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{figs/chainfix}
%\psfig{file=chainfix.eps,width=3cm}
\caption{A more reliable approach: node $a$ passes the trust it holds
directly to nodes $b$, $c$ and $d$.\label{mr}}
\end{center}
\end{figure}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%



\section{Policy and rationality}

What kind of policy should be employed in defining the expectation of
future behaviour? Probability theory is built on the assumption that
past evidence can motivate a prediction of the future. At the heart of
this is an assumption that the world is basically constant. However,
future prediction is the essence of gambling: there are scenarios in
which evidence of the past is not an adequate guide to future
behaviour. An agent might also look elsewhere for guidance.

\begin{itemize}
\item {\em Initialization}: An agent of which we
have initially no experience might be assigned an initial trust value
of $1$, $\frac{1}{2}$, or $0$ if we are respectively trusting, neutral or
un-trusting by nature.

\item {\em Experience}: One's own direct experience of
a service or promise has primacy as a basis for trusting an agent in a
network. However, an optimistic agent might choose not to allow the
past to rule the future, believing that agents can change their
behaviour, e.g. ``the agent was having a bad day''.

\item {\em Advice}: An agent might feel that it is not the best judge
and seek the advice of a reputable or trustworthy agent: ``Let's see
what X thinks''. We shall use this idea in section \ref{central}
to define a global trustworthiness.

\item {\em Reputation}:
Someone else's experience with a promise can serve as an initial value
for our own trust.


\item {\em Damnation}: Some agents believe that,
if an agent fails even once to fulfil a promise, then it is completely
un-trustworthy. This extreme policy seems excessive, since there might
be reasons beyond the control of the agent that prevent it from
delivering on its promise.

\end{itemize}


If we lack any evidence at all about the trustworthiness of an agent
with respect to a given promise, we might adopt a policy of
using the agent's record of keeping other kinds of promises.

\begin{proposal}[Transference of evidence]
In the absence of direct evidence of type $t(b)$, in a promise body
$b$, one may use a policy-determined mixture of values from other
types as an initial estimate.
\end{proposal}
The rationality of such a procedure can easily be questioned,
but there is no way to rule out the ad hoc decision as a matter
of policy.
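To make the policy alternatives concrete, here is one possible sketch (our own assumption, not a prescription from the text) of how the {\em Initialization}, {\em Experience} and {\em Damnation} policies might be coded. The initial trust value acts as a prior observation whose weight is itself a policy choice.

```python
# Illustrative policies for estimating E(promise kept) from experience.
# The names and the prior-weight scheme are our own invention.

def expectation(outcomes, initial=0.5, prior_weight=1.0):
    """Estimate from a list of 1 (kept) / 0 (broken) observations,
    seeded with a policy-chosen initial trust value (1, 1/2 or 0)."""
    kept = sum(outcomes)
    return (prior_weight * initial + kept) / (prior_weight + len(outcomes))

def damnation(outcomes):
    """The extreme policy: any single failure destroys all trust."""
    return 0.0 if 0 in outcomes else 1.0

# A neutral agent (initial 1/2) sees the promise kept twice, broken once:
print(expectation([1, 1, 0]))   # 0.625
print(damnation([1, 1, 0]))     # 0.0
```

The prior weight controls how quickly direct experience overrides the initial disposition; a large weight models a stubborn agent, a small one a quickly persuaded agent.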



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Reputation}

We have defined a reputation to be simply a valuation of something
(not necessarily a promise) received by an agent about some other
agent. A natural basis for reputation (and one that is used in
``reputation systems'' in computing) is the valuation of
trustworthiness. Here we consider the effect that such transmission
of information has on the local trust within a network of agents.

\subsection{Borrowed trust}

Suppose that an agent $T$ trusts an agent $S$ to keep its promise to
$R$ with probability $E_T\left( S\promise{b} R\right)$, and suppose
that this agent $T$ promises to transmit this as $S$'s reputation to
another agent $U$. $U$'s estimate of the trustworthiness of $T$'s
communication is
\beq
U \trust{\rm reputation} T \policy E_U\left( T \promise{\rm reputation} U\right)
\eeq
Can we say what $U$'s expectation for the reliability of the original
promise $S\promise{b} R$ should be? In spite of the fact that
probabilities for independent events combine by multiplication, it
would be presumptuous to claim that
\beq
E_U\left(S\promise{b} R\right) = E_U\left( T \promise{\rm reputation} U\right)
E_T\left( S\promise{b} R \right),
\eeq
since $U$ does not have any direct knowledge of $E_T\left(
S\promise{b} R \right)$; it must evaluate the trustworthiness and
reliability of the source.
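As a numerical sketch of this discounting policy (our own illustration, with hypothetical values): if $U$ trusts $T$'s reputation promise at $0.9$, and the communicated value of $S$'s reliability is $0.8$, then $U$'s discounted expectation is $0.72$; compounding the discount over several reporters shows why trust erodes along serial chains.

```python
# Hypothetical values illustrating reputation discounting: U weights the
# communicated reliability of S by U's own trust in the reporter T.

def borrowed_trust(trust_in_reporter, communicated_value):
    """E_U(S -b-> R) := E_U(T -rep-> U) * communicated value of E_T(S -b-> R)."""
    return trust_in_reporter * communicated_value

print(round(borrowed_trust(0.9, 0.8), 2))   # 0.72

# Relayed through three reporters, each trusted at 0.9, the discount
# compounds, so the borrowed value decays along the chain:
value = 0.8
for hop_trust in [0.9, 0.9, 0.9]:
    value = borrowed_trust(hop_trust, value)
print(round(value, 4))                      # 0.5832
```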

Suppose we denote the communicated value of $E_T\left( S\promise{b} R \right)$
by ${\cal E}_{U\leftarrow T}\left( S\promise{b} R \right)$; then one could conceivably
(and as a matter of rational policy) choose to define
\beq
E_U\left(S\promise{b} R\right) \policy E_U\left( T \promise{\rm reputation} U\right)
{\cal E}_{U\leftarrow T}\left( S\promise{b} R \right).
\eeq
With this notation, we can conceivably follow historical paths through
a network of promises.

However, it is important to see that no agent is obliged to adopt such
a policy. Thus trust and reputation do not propagate in a faithfully
recursive manner. Moreover, in the absence of complete
and accurate common knowledge by all agents, it is impossible to eliminate the
unknowns in defining the expectation values.


\subsection{Promised trust}

Trust is an evaluation that is private to an agent. This evaluation
can be passed on in the form of a communication (leading to
reputation), or it can be passed on as a promise to trust.

\begin{itemize}
\item $S$ promises $R$ that $S$ will trust $R$: $S \promise{\tau=0.6} R$.
\item $S$ promises $R$ that $S$ will trust $T$: $S \promise{\tau=0.6} R[T]$.
\end{itemize}
Why would anyone promise a party ($R$) to trust $T$ without telling $T$?
One reason is that there might be strategic bargaining advantages to doing so\cite{schelling1}.


\subsection{Updating trust with reputation}

An agent can use the reputation of another agent as a sample of
evidence by which to judge its trustworthiness. It can then attach a
certain weight to this information according to its belief, in order
to update its own trust. The weighted addition modifies the old trust
value $T$ with the new reputation data $R$.
\beq
E \mapsto \frac{w_{\rm new} R + w_{\rm old} T}{w_{\rm new}+ w_{\rm old}}
\eeq
This is indistinguishable from a Bayesian update.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Global Measures of Trust}\label{central}

Which are the most trusted agents in a network?
Trust has so far been measured at the location of each individual
agent. The valuation is private. A trust valuation becomes an agent's
reputation when the valuation is passed on to others. The passing-on
includes a belief-revision process too; this is also a Bayesian
posterior probability update process, just like the case of basing
trust on different ensembles in section \ref{ensemble}.


Let us postulate the existence of a vector of received trusts that is
available to any particular agent. The agent is then able to combine
this information to work out a global measure, which we can call
{\em community trust}. This is analogous to the graphical security model in
\cite{burgessC12}.

The trust matrix $T$ is defined as follows. The $(A,B)$-th element of the
matrix
\beq
T_{AB}(b) \equiv E_A(B \promise{b} *)
\eeq
is $A$'s trust in $B$ with respect to all promises of type $b$.

\begin{definition}[Community trust (Trustworthiness and trustingness)]
The global or community trust is defined by the principal eigenvectors
of $T$ and $T^{\rm T}$. Since this is a transmitted quantity, by definition
it is a reputation.

The global reputations for being {\em trustworthy} $\vec W$ are
defined by the normalized components of the principal eigenvector of
the transpose matrix:
\beq
T_{BA} W_B = \lambda W_A.
\eeq

The global reputations for being {\em most trusting} $\vec S$ are
defined by the normalized components of the principal eigenvector
\beq
T_{AB} S_B = \lambda S_A.
\eeq
\end{definition}
An agent is said to be trusting if it assigns high expectations that
other agents will keep their promises to it. An agent is said
to be trustworthy if other agents assign it a high expectation of
keeping its promises to them.

Observe that, in the absence of labels about specific agent
relationships, the concepts of {\em trustworthiness} and {\em
trustingness} for an agent $A$ are properties of the global trust graph that
has $A$ as a source, and not of an individual agent, since they are derived
from relationships and by voting.

We can easily show that this has the property of a proportional vote.
Let $v_i$ denote a vector for the trust ranking, or connectedness of
the trust graph, of each node $i$. Then the trustworthiness of node $i$ is
proportional to the sum of the votes from all of $i$'s nearest
neighbours, weighted according to their trustworthiness (i.e. it is just
the sum of their trust valuations):
\beq
v_i \propto\sum_{j={\rm neighbours\ of\ }i} v_j \ \ .
\label{evc1}
\eeq
This may be written more compactly as
\beq
v_i = ({\rm const}) \times \sum_j T_{ij} v_j \ ,
\label{evc2}
\eeq
where $T$ is the {\em trust graph adjacency matrix}, whose entries
$T_{ij}$ are 1 if $i$ is a neighbour of $j$, and 0 otherwise. We can
rewrite eqn. (\ref{evc2}) as
\beq
{T}\,\vec{v} =\lambda \vec v \ .
\label{evcfin}
\eeq
Now one sees that the vector is actually an eigenvector of the trust
matrix $T$.
If $T$ is an $N\times N$ matrix, it has $N$ eigenvectors
(one for each node in the network), and correspondingly many
eigenvalues. The eigenvector of interest is the principal eigenvector,
i.e. the one with the highest eigenvalue, since this is the only one that
results from summing all of the possible pathways with a positive
sign. The components of the principal eigenvector rank how
``self-consistently central'' a node is in the graph. Note that only
ratios $v_i/v_j$ of the components are meaningfully determined. This
is because the lengths $|\vec v|= \sqrt{\sum_i v_iv_i}$ of the eigenvectors are not determined
by the eigenvector equation. We normalize them here by setting the
highest component to 1. This form of well-connectedness is termed
``eigenvector centrality'' \cite{bonacich1} in the field of social
network analysis, where several other definitions of centrality
exist.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{figs/centrality}
%\psfig{file=centrality.eps,width=8cm}
\caption{An example trust graph. For simplicity all trust arrows are
assumed to be of the same type, e.g. trust in the promise to pay bills.
Dashed lines are lines which will be removed in the second example.\label{exb}}
\end{center}
\end{figure}

Note that this does not assume any transitivity of trust; it says simply that
each agent's trustworthiness is equal to the sum of all the other
agents' trust measures (as if they are voting), weighted so that the
most trustworthy agents' opinions count proportionally highest.
It is a proportional representation vote by the agents about one another.

\subsection{Example of global trust}

Consider a number of promises of a single type, e.g. agents promise
to pay their bills in various service interactions.
Each payee then
rates its expectation of the payer and makes this information globally
available as a public measure of its local trust. Referring to
fig. \ref{exb}, we assume the following local trusts:
\beq
1& \strust{\rm pay}& 6 = 0.2\\\nonumber
2& \strust{\rm pay}& 6 = 0.3\\\nonumber
3& \strust{\rm pay}& 7 = 0.1\\\nonumber
4& \strust{\rm pay}& 7 = 0.1\\\nonumber
5& \strust{\rm pay}& 7 = 0.1\\\nonumber
6& \strust{\rm pay}& 7 = 0.6\\\nonumber
7& \strust{\rm pay}& 6 = 0.5\\\nonumber
6& \strust{\rm pay}& 8 = 0.8\\\nonumber
8& \strust{\rm pay}& 6 = 0.2\\\nonumber
7& \strust{\rm pay}& 8 = 0.8\\\nonumber
8& \strust{\rm pay}& 7 = 0.3
\eeq
The trust matrix is thus
\beq
T = \left(
\begin{array}{ccccccc|c}
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.2 & 0.0 & 0.0\\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.3 & 0.0 & 0.0\\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1 & 0.0\\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1 & 0.0\\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1 & 0.0\\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.6 & 0.8\\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.5 & 0.0 & 0.8\\\hline
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.2 & 0.3 & 0.0\\
\end{array}
\right)
\eeq
Note that the horizontal and vertical bars delineate the entries corresponding
to the dashed lines, which will be removed in the second example.
The normalized right eigenvector $\vec S_8$ represents how trusting
the agents are.
The left eigenvector $\vec W_8$ (i.e. the eigenvector of
the transpose matrix) represents the global trustworthiness:
\beq
\vec S_8 = \left(
\begin{array}{c}
0.21\\
0.31\\
0.10\\
0.10\\
0.10\\
1.00\\
0.94\\
0.50\\
\end{array}
\right), ~~~
\vec W_8 = \left(
\begin{array}{c}
0\\
0\\
0\\
0\\
0\\
0.55\\
0.65\\
1.00\\
\end{array}
\right)
\eeq
Thus, agent 8 is the most trustworthy. Agents 1 to 5 are not trustworthy at
all in this scenario, since we have not rated any promises made by
them. Agent 6 is the most trusting of all, since it gives a large
amount of trust to agent 8. Thus, these two agents colour the global
picture of trust significantly through their behaviours.

We note that the agents with zero trust ratings are all recipients of
promises; they do not make any promises of their own. These are
suppliers of whatever service or good is being sold; they do not
promise payments to anyone, hence no one needs to trust them to pay
their bills. The reader might find this artificial: these agents might
make it their policy to trust the other agents even though they have made no
promise. In this case, we must ask whether the trust would be of the
same type or not: i.e. would the buyers trust the suppliers to pay
their bills, or would their trust be based on a different
promise, e.g. the promise to provide quality goods?

By contrast, the agents who are not trusted are somewhat trusting by
virtue of receiving such promises of payment.

Suppose we eliminate agent number 8 (by removing the dashed lines in
the figure), and let us see how the ranking changes when we delete this
important agent.
Now agent 6 still remains the most trusting, but
agent 7 becomes the most trusted, once again mainly due to agent 6's
contribution.

\beq
\vec S_7 = \left(
\begin{array}{c}
0.37\\
0.55\\
0.17\\
0.17\\
0.17\\
1.00\\
0.92\\
\end{array}
\right), ~~~
\vec W_7 = \left(
\begin{array}{c}
0\\
0\\
0\\
0\\
0\\
0.91\\
1.00\\
\end{array}
\right)
\eeq
We can note that the symmetries of the graph are represented in the
eigenvector in a natural way.


\subsection{Boundaries and allegiances}

Canright and Monsen have defined regions of a graph, based on the
structures that arise naturally from eigenvector
centrality\cite{roles}. This has been developed further for directed
graphs in ref. \cite{burgessroles}. Trust is sometimes associated with
maintaining certain boundaries or allegiances. The global trust model
proposed above falls into a natural landscape based on the graph,
characterized by local maxima. Agents cluster naturally into
distinct hills of mutual trust, separated by valleys of more tenuous
trust, in the centrality function.

This characterization is a useful way of identifying a community
structure. Humans are not very good at understanding boundaries: they
understand identities, e.g. a company name; but where is the real
boundary of the company or computer system? Its tendrils of influence
might reach farther or closer than one imagines. The topology of
underlying promises offers a quantifiable answer to this question.
Such allegiances can be compared to the notion of a coalition in game
theory\cite{morgenstern1,rapoport1}.
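The community-trust rankings in the 8-agent example above can be reproduced with a short power-iteration sketch. This is our own illustration in plain Python (no promise-theory library is assumed); the matrix is copied from the example.

```python
# Power iteration on the example trust matrix: T[i][j] is agent (i+1)'s
# trust in agent (j+1).  The right eigenvector ranks how trusting the
# agents are; the eigenvector of the transpose ranks trustworthiness.

T = [
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6, 0.8],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.8],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.3, 0.0],
]

def principal_eigenvector(M, iterations=200):
    """Power iteration; normalize so the largest component is 1."""
    n = len(M)
    v = [1.0] * n
    for _ in range(iterations):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        top = max(w)
        v = [x / top for x in w]
    return v

S = principal_eigenvector(T)                                   # trusting
Tt = [[T[j][i] for j in range(8)] for i in range(8)]           # transpose
W = principal_eigenvector(Tt)                                  # trustworthy

print(S.index(max(S)) + 1)   # 6: agent 6 is the most trusting
print(W.index(max(W)) + 1)   # 8: agent 8 is the most trustworthy
```

Iterating the normalized map drives out the subdominant eigenvectors, and the result agrees with the components of $\vec S_8$ and $\vec W_8$ quoted above to two decimal places.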



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Trust architectures}

Trust is closely associated with information dissemination. There are
essentially only two distinct models for achieving information
distribution: centralization and {\em ad hoc} epidemic flooding.
Alternatively, one might call these central-server versus peer-to-peer.


Two so-called trust models are used in contemporary technologies,
reflecting these approaches: the Trusted Third Party model
(e.g. X.509 certificates, TLS, or Kerberos) and the Web of Trust (as
made famous by the Pretty Good Privacy (PGP) system due to Phil
Zimmermann and its subsequent clones). Let us consider how these models are
represented in terms of our promise model.

\subsection{Trusted Third Parties}



The centralized solution to ``trust management'' is the certificate
authority model, introduced as part of the X.509 standard used in web authentication
and modified for a variety of other systems (see
fig. \ref{thirdparty})\cite{itut1,x509,rfc3280}. In this model, a central authority has the
final word on identity confirmation and often acts as a broker between
parties, verifying identities for both sides.


\begin{definition}[Authority]
An agent which is the source of a promise and whose word
is beyond doubt (i.e. a trusted party).
\end{definition}

A central authority promises (often implicitly) to all agents the legitimacy
of each agent's identity (hopefully implying that it verifies this
somehow). Moreover, for each consultation the authority promises that
it will truthfully verify an identity credential (public key) that is
presented to it.
The clients and users of this service promise that
they will use this confirmation. Thus, in the basic interaction, the
promises being made are:
\beq
{\rm Authority} &\promise{\rm Legitimate} &{\rm User}\\
{\rm Authority} &\promise{\rm Verification} & {\rm User}\\
{\rm User} &\promise{U({\rm Verification})} &{\rm Authority}
\eeq
To make sense of trust, we look for expectations of the promises
being kept.
\begin{enumerate}
\item The users expect that the authority is legitimate, hence they
trust its promise of legitimacy.
\item The users expect that the authority verifies identity correctly, hence
they trust its promise of verification and therefore use it.
\end{enumerate}
Users do not necessarily have to be registered themselves with
the authority in order to use its services, so it is not strictly
necessary for the authority to trust the user. However, in registering
as a client, a user also promises its correct identity, and the authority
promises to use this.
\beq
{\rm User} &\promise{\rm Identity}& {\rm Authority}\\
{\rm Authority} &\promise{U({\rm Identity})}& {\rm User}
\eeq
One can always discuss the evidence by which users would trust the
authority (or third party). Since information is simply brokered by
the authority, the only right it has to legitimacy is by virtue of a
reputation. Thus expectation 1 above is based, in general, on
the rumours that an agent has heard.


\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{figs/thirdparty}
%\psfig{file=thirdparty.eps,width=4.5cm}
\caption{\small The Trusted Third Party, e.g. TLS or Kerberos. A
special agent is appointed in the network as the custodian of
identity. All other agents are expected to trust this.
The special agent promises to verify the authenticity of an
object that is shared by the agents. In return for this service, the
agents pay the special agent.\label{thirdparty}}
\end{center}
\end{figure}

Most of the trust flows from users to the authority; thus there is a
clear subordination of agents in this model. This is the nature of
centralization.

\subsection{Web of Trust}


Scepticism about centralized solutions (distrust perhaps) led to the
invention of the epidemic trust model, known as the Web of Trust (see
fig. \ref{webtrust})\cite{abdul1}. In this model, each individual
agent is responsible for its own decisions about trust. Agents
confirm their belief in credentials by signing one another's
credentials. Hence if I trust $A$ and $A$ has signed $B$'s key,
then I am more likely to trust $B$.

As a management approximation, users are asked to make a judgement
about a key from one of four categories: i) definitely trustworthy,
ii) somewhat trustworthy, iii) un-trustworthy, iv) don't know.

An agent then compares these received valuations to a threshold value
to decide whether or not a credential is trustworthy to it.

The promises are between the owner of the credential and a random agent:
\beq
{\rm Owner} &\promise{\rm Identity} &{\rm Agent} \\
{\rm Agent} &\promise{U({\rm Identity})} &{\rm Owner} \\
{\rm Agent} &\promise{\rm Signature} &{\rm Owner} \\
{\rm Owner} &\promise{U({\rm Signature})} &{\rm Agent}
\eeq
The owner must first promise its identity to an agent it meets. The
agent must promise to believe and use this identity credential. The
agent then promises to support the credential by signing it, which
implies a promise (petition) to all subsequent agents.
Finally, the
owner can promise to use the signature or reject it. Trust enters here
in the following ways:

\begin{enumerate}
\item The agent expects that the identity of the owner is correct and trusts it.
This leads to a Use promise.
\item The owner expects that the promise of support is legitimate and trusts it.
This leads to a Use promise.
\end{enumerate}
What is interesting about this model is that it is much more
symmetrical than the centralized scheme. It has certain qualities
that remind us of our definition of global trust in section \ref{central}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{figs/webtrust}
%\psfig{file=webtrust.eps,width=9cm}
\caption{\small In a web of trust an agent signals a
promise to all other agents that it has trusted the authenticity of
the originator's identity. As a key is passed around (second figure)
agents can promise its authenticity by signing it or not.
\label{webtrust}}
\end{center}
\end{figure}
However, it is not equivalent to our model, since the very nature of
the web of trust is dictated by the transactions in the model, which
are automatically bilateral (ours need not be). Moreover, the
information is passed on in a peer-to-peer way, whereas our global
idealization makes trust valuations common knowledge (global
reputations). In some respects, the web of trust is a pragmatic
approximation to the idealized notion of trust in section
\ref{central}. The main differences are:
\begin{itemize}
\item In the Web of Trust, only a limited number of expectation values is allowed,
and the user does not control these, i.e. few policy choices
for agent expectation are allowed.

\item An agent does not see a complete trust or promise graph.
It sees only the local
cluster to which it is connected. This is sufficient to compute a global
trust for that component of the graph.

\item The Web of Trust graph is always bilateral, with arrows moving in both directions;
thus no one is untrusted, or un-trusting.

\item The information needed to construct a fully self-consistent measure of trust
is not available in the system. Hence there is no clear measure of
who is more trustworthy in the web of trust.

\end{itemize}

Some of these limitations could no doubt be removed. A Bayesian
approach could naturally lead to a better approximation. However, a
basic flaw in these implementation mechanisms is the need to trust
the mediating software itself. Since, as we have shown, trust is not
necessarily transitive, one ends up in most cases trusting the
software that is supposed to implement the trust management rather
than the parties themselves.

\section{Summary}

The concept of promises provides a foundation that has been unclear in
discussions of trust. It allows us to decouple the probabilistic
aspect from the network aspect of policy relationships, without
introducing instantaneous events. It provides (we claim) a natural
language for specific policies, extended over time. Promises have
types and denote information flow, which in turn allows us to discuss
what is trusted and by whom. We believe the use of promises to be
superior to a definition based on actions, since the localization of
actions as space-time events makes trust ill-defined both before an
action has been executed and after it has been executed.
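The threshold rule of the Web of Trust, discussed earlier in this
chapter, can be illustrated with a small sketch. The category scores,
the threshold value, and all function names below are our own
illustrative assumptions, not part of any actual PGP implementation;
in our terms, both the scores and the threshold are policy choices of
the receiving agent.

```python
# Hypothetical sketch of the Web of Trust threshold decision: each
# signer places a key in one of the four valuation categories, and an
# agent sums the scores of those valuations against its own threshold.
# The numeric scores are an assumed policy, not taken from PGP.
SCORES = {
    "definitely": 1.0,   # definitely trustworthy
    "somewhat": 0.5,     # somewhat trustworthy
    "untrusted": 0.0,    # un-trustworthy
    "unknown": 0.0,      # don't know
}

def credential_trusted(signatures, threshold=1.0):
    """Decide whether a credential meets this agent's trust threshold.

    signatures: list of (signer, category) pairs attached to the key.
    Returns True if the summed valuation reaches the threshold.
    """
    total = sum(SCORES[category] for _signer, category in signatures)
    return total >= threshold

# One 'definitely' signer meets a threshold of 1.0 on its own, while
# two 'somewhat' signers are together just sufficient.
print(credential_trusted([("A", "definitely")]))
print(credential_trusted([("A", "somewhat"), ("B", "somewhat")]))
```

Note that each agent may choose its own scores and threshold, which is
precisely the autonomy of expectation that the standard tools restrict.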

Promises allow us to relate trust and trust-reputation in a generic
way, and suggest an algorithm from which to derive global network
properties, based on social network theory. This is a significant
improvement over previous models. Reputation is not uniquely coupled
to trust, of course -- it can be related to many different valuations
of promised behaviour, including wealth, kindness etc.

We show how bundles of promises can be combined using the rules for
probabilistic events (similar to fault tree analysis), and we model the
two main trust architectures easily. The PGP Web of Trust, as well as
the Trusted Third Party, can be explained as a special case of the global
trust models derived here; however, standard tools do not permit users
to see the entire web, or to measure relative trustworthiness in a
community using these implementations.

In future work there is the possibility of using this notion of trust in
explicit systems. The Unix configuration system cfengine\cite{cfwww}
uses the notion of promises and agent autonomy to implement a policy-based
management system. The trustworthiness of hosts with respect to
certain different behaviours can be measured directly by neighbouring
agents to whom promises are made. More generally, if one has a
monitoring system that one believes trustworthy to begin with, it is
possible to observe whether an agent stops keeping its own promises
about security issues. This might be a signal to reevaluate one's
expectation that the system is trustworthy. These tests have been
partially implemented in cfengine and are presently being tested.

Trust is merely an expression of policy and it is therefore
fundamentally {\em ad hoc}.
Promises reveal the underlying motives for
trust and whether they are rationally or irrationally formed.
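The fault-tree style combination of promise reliabilities mentioned in
the summary can be sketched as follows. The function names and the
choice of series/parallel combination rules are our own illustrative
assumptions; the only ingredients taken from the text are the observed
probabilities that individual promises are kept and the standard rules
for independent probabilistic events.

```python
# Illustrative sketch of combining the reliabilities of a bundle of
# promises with the rules for probabilistic events, in the spirit of
# fault tree analysis.  Each p is the observed probability that a
# promise is kept, and the promises are assumed independent.

def all_required(probabilities):
    """All promises in the bundle must be kept (series / AND gate):
    the combined reliability is the product of the parts."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

def any_sufficient(probabilities):
    """Any one kept promise suffices (parallel / OR gate):
    one minus the probability that every promise fails."""
    fail = 1.0
    for p in probabilities:
        fail *= (1.0 - p)
    return 1.0 - fail

# Example: three independently measured promise reliabilities.
bundle = [0.9, 0.8, 0.95]
print(all_required(bundle))    # approx. 0.684
print(any_sufficient(bundle))  # approx. 0.999
```

The AND rule models an agent whose trust depends on a whole bundle
being kept, while the OR rule models redundant promises from several
agents, any one of which would satisfy the expectation.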