Chapter 4
Trust, Secrecy, Covertness, and Authentication in Social Networks
We use epinets to unpack and analyze the epistemic structure of trust. We follow the approach of mapping the microstructure of the “epistemic glue” of social interactions and relations and ask, “What must someone think or know about what another thinks or knows when there is a trust relationship between them?” We propose and empirically examine a model of trust as a form of confidence in the competence and integrity of another person, wherein the trustful knows that the trusted knows a fact if that fact is true, and says it if s/he knows it. This unpacking allows us to explore the dark side of the phenomenon of trust—how it can be broken without appearing to be broken—as well as the interplay between breaches of trust in integrity and trust in competence.
. . .
Trust matters: “No single variable . . . so thoroughly influences interpersonal and group behavior . . .” (Golembiewski and McConkie 1975: 31); “Trust is indispensable in social relationships” (Lewis and Weigert 1985: 968); “Trust, in economic exchanges, can be a source of competitive advantage” (Barney and Hansen 1994: 188). The idea that trust is critical to economic activity has a sociological lineage dating back to the early writings of Max Weber (1992) and a lineage in political thought dating back at least to Alexis de Tocqueville (2000: 142), who wrote: “The art of association then becomes . . . the mother of action, studied and applied by all.”
Blau (1964: 64) identified trust as “essential for stable social relationships,” and Durkheim (1933: 28) wrote implicitly of trust when he spoke of the need for a “series of secondary groups,” interposed between the state and the individual, that are “near enough to the individuals to attract them strongly in their sphere of action and drag them, in this way, into the general torrent of social life.” Fukuyama (1995) conceived of “spontaneous sociability” arising from mutual trust as a foundation of economic prosperity, arguing that trust is the means by which culture influences economic behavior, and that “high-trust” and “low-trust” cultures exhibit fundamentally different economic development paths.
These various ideas capture different and sometimes overlapping aspects of trust, as we will see. Our focus here is on the specific epistemic structures that constitute trustful-trusting relationships. In other words, we ask the question “What must agents know or believe about what those whom they trust know or believe in order for their relationship to be one of trust?” By unpacking and analyzing the epistemic glue specific to trust, we accomplish three things: (1) we build models of trust that allow us to look closely at its “dark side”: apparent and real breaches of trust and their authentic and inauthentic repairs; (2) we unpack “dense” social structures such as “circles of trust” and superconductive informational conduits in terms of the epistemic glue that holds them together; and (3) we represent and model moves and strategies that agents may use to manipulate trust-based relationships in a network and thereby influence and change the network’s trust topology.
Trust as Explanans and Explanandum
Throughout these analyses, trust appears implicitly as an explanans of subsequent social behavior rather than an explanandum in need of further elucidation. This situation has motivated several attempts to unpack trust—as a behavioral disposition, as a congruity of mutual expectations, as a special kind of relationship. Each attempt is informative but also unsatisfactory in one or more ways.
Trust as a Behavioral Disposition. Trust can be defined as the instantiation of cooperative behavior in the prisoner’s dilemma game or one of its variants (Lewis and Weigert 1985). Such a definition relies on a view of trust either as an individual disposition to behave in a particular way or as an individual behavior pattern. It leaves out the interpersonal aspect of trust as a relation between the trusting and the trusted (Barber 1983) as well as the cognitive component, which stresses the mutual expectations of the trusted and the trustful (Gambetta 1988). In the same tradition, trust is also defined in negative terms as the absence of high-net-benefit opportunistic behavior. Williamson’s argument (1975, 1985), for example, is that economic transactions are structured to eliminate opportunistic behavior in the transactors. Trust appears as a consequence of the absence of opportunistic behavior when there are no penalties for engaging in it.
Behaviorist definitions make no reference to the underlying beliefs of the trusting and the trusted about each other or about the situation at hand. Trust must therefore be inferred ex post from observation of past behavior. Rendering trust a function of past observed behavior alone also rules out investigation of subtler aspects of trust-based interactions, such as forgiveness for certain apparent or real breaches of trust and mutual forbearance in noisy interactional regimes, which limits the precision of the resulting trust model. When a firm chooses one form of relational contracting over another, we may assume that the rejected form was decided against because of some failure of trust between the firms, but we cannot predict the contract that they will enter into by reference to the beliefs, expectations, and modes of inference that they use.
Moreover, when behavioral researchers refer to the “probability” of an agent taking a particular action, they refer to the disposition or objective probability of that agent doing so, rather than the subjective probability that colors the perception of his/her observers. Thus, an individual’s trustworthiness becomes the only relevant variable, making it difficult to analyze situations in which trust is preserved in the face of seemingly guileful behavior or it breaks down in spite of apparently principled behavior. Such situations are relevant to the dynamics of trust, however, because the interpersonal interpretation process that leads to the buildup and breakdown of trust is grounded not only in direct behavioral observation but also in theories, beliefs, and interpersonal assumptions that influence what constitutes the relevant observations (Kramer 1999).
Trust as an Interpersonal Entity. The interpersonal dimension of trust is emphasized by researchers who focus on the relationship between trustfulness and trustworthiness (Barber 1983) and by those who focus on comprehensive measures of trust in a relationship (Butler 1991; Mayer, Davis, and Schoorman 1995). Interpersonal approaches to trust are better at picking out more nuanced constructs, such as benevolence, that are specific to the trusting-trustful relationship. On this relational basis alone, however, we cannot distinguish between insightful benevolence from someone who knows her benevolence can be taken advantage of but nevertheless chooses to extend it, and credulous benevolence from someone who does not know that she is being taken advantage of, in which case benevolence is more likely to actually be gullibility.
To unpack trust at this level, we must be precise about what agents know and how they know what they know. Interactive reasoning offers a tool for probing such differences. The insightfully benevolent agent knows that the other agent knows that he can be taken advantage of, whereas the gullibly benevolent agent possesses no such interactive knowledge.
For example, Axelrod (1984) recounts the story of a series of prisoner’s dilemma games between a Russian male diplomat and an American female diplomat played under “noisy” conditions in which the observation of a defection of one player by another could have been due either to an error on the part of the reporting procedure or to that player’s intention to defect. The American diplomat came out significantly ahead and explained that she defected more frequently than she would have had there been no noise in the game because she thought the Russian diplomat, on the basis of his cultural stereotypes about the cooperativeness of women, would attribute most of her defections to reporting errors rather than to intent. The Russian diplomat confirmed that he thought of American women as less inclined to defect than American men and had therefore attributed his opponent’s defections to reporting errors.
Trust as a Cognitive-Rational Entity: Congruity of Mutual Expectations of Competence. Friedland (1990) posited the desirability of trustworthy behavior on rational grounds alone: In a game in which each player has a choice between cooperative and uncooperative behavior and the perspective of an indefinite number of future interactions, strategies that started out cooperatively and then retaliated responsively and proportionately for uncooperative behavior were likely to bring greater payoffs than either strategies that were uncooperative from the beginning (exemplifying completely untrusting players) or strategies that were cooperative and unresponsive to uncooperative behavior (exemplifying players who trusted blindly).
Hill (1990) argued that, since transaction costs shape the form of economic exchange and since the lack of trust between two transactors increases transaction costs, a reputation for trustworthiness is economically valuable for an agent who does not know ex ante who his potential business partners might be. Trustworthy behavior builds an individual’s reputation for trustworthy behavior, increasing the expected value of that individual to his prospective partners.
Such cognitive-rational congruity views of trust rest on implicit assumptions about agents’ beliefs about each other’s beliefs. Suppose that Ada trusts Bart to carry out a particular action because she believes him to be a rational person who values his reputation and does not want to damage it by shirking his obligations. Implicitly, Ada assumes that Bart indeed cares about his reputation and that Bart believes that she sees his actions as a signal of his trustworthiness. If Bart is rational but believes his actions are only imperfectly observable by Ada, then he might still rationally act opportunistically and take advantage of her. If Ada knows this as well, then she will rationally not trust Bart.
In a trustful-trusting relationship, however, it is not enough that agents trust one another; each must also trust the other to trust her, trust the other to trust her to trust him, and so forth. Ada’s rational trust of Bart rests on her beliefs about Bart’s beliefs regarding the situation at hand. In turn, Bart, if he is rational, must consider whether or not his actions will have the desired signaling value. If he believes that Ada is ill disposed toward him and will attribute his behavior negatively “no matter what he does,” then Bart may not act cooperatively. If Ada knows this and believes that Bart believes that she is ill disposed toward him, she will not expect him to act cooperatively.
These higher-level beliefs are critical to the joint realization of successful cooperative outcomes among rational agents because, given the usual payoff structure of competition-cooperation decisions (highest payoff for matched cooperative actions, lowest payoff for mismatched actions, medium payoff for matched competitive actions), each agent must believe that the other knows the payoffs in order for cooperation to be individually rational.
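To make the logic concrete, the following sketch (in Python, with hypothetical payoff numbers of our own choosing) encodes a payoff structure of this kind and shows that cooperating is a best response only when an agent expects the other to cooperate, which is why each agent’s beliefs about the other’s knowledge of the payoffs carry so much weight.

```python
# A minimal sketch of the payoff structure described above (numbers are
# hypothetical): matched cooperation pays most, mismatches pay least,
# and matched competition pays a medium amount.
PAYOFF = {  # (my_action, other_action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 0, ("D", "D"): 1,
}

def best_response(expected_other: str) -> str:
    # Choose the action that maximizes my payoff against the action
    # I expect the other agent to take.
    return max(["C", "D"], key=lambda a: PAYOFF[(a, expected_other)])

print(best_response("C"))  # 'C': cooperating is rational only if expected
print(best_response("D"))  # 'D': otherwise matched competition is safer
```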
Trust as a Cognitive-Moral Entity: Congruity of Mutual Expectations of Integrity. The moral approach to trust posits norms as guarantors of cooperative behavior because they are ethically justified: “Trust is the expectation by one person, group, or firm of ethically justifiable behavior—that is, morally correct decisions and actions based on ethical principles of analysis—by another person, group, or firm in a joint endeavor or economic exchange” (Hosmer 1995: 399). Granovetter (1985) grounds trust in the expectation that would-be exchange partners will conform to shared norms and obligations, and he cites with approval Arrow’s (1974) idea that economic interactions are grounded in a “generalized morality” that is implicitly agreed on by members of a society. It is rational to act morally and moral to act rationally—the argument goes—and it proceeds to define morality as a commitment to a body of shared norms of behavior. However, an implicit epistemic backbone safeguards valid inference here, which is that it is also rational to assume that everyone else is rational and therefore to believe all will also act morally by virtue of being rational.
To deepen our understanding of trust, we make its epistemic backbone explicit, by asking, “What do interacting individuals have to believe about each other’s beliefs in order to engage in the mutually beneficial behavior that is thought to exemplify a ‘high-trust’ relationship?” and “What is the difference between an ‘attributed’ norm and a ‘shared’ norm?” An attributed norm is one that Anja believes that Bruce shares when in fact Bruce may or may not share it. Thus, Anja may be wrong about the attributions she makes about Bruce. A shared norm, in contrast, is one that Anja and Bruce each subscribe to and correctly believe that the other subscribes to as well, that the other believes that s/he subscribes to, and so on.
Even so, ambiguities remain that can only be resolved by further epistemic analysis. It is difficult for any observer of an action to identify unambiguously the norm or maxim to which an action conforms (Moldoveanu and Stevenson 1998). The very same action taken by an agent may be unjustifiable in one moral system (deontological, say) yet be justifiable in another (utilitarian, say).
Observers’ interpretations of each other’s actions matter. Whether or not Anja believes that Bruce is acting in conformity to a particular norm (N) and not to some other norm (M) is critical to her decision to trust him, having observed his behavior. In turn, Bruce’s belief that Anja believes he is acting in accordance with N (rather than M) is critical to his interpretation of her retaliatory behavior as a breach of trust in its own right or as a rightful retaliation for his transgression, by his actions, of M. Finally, Anja’s belief that Bruce believed Anja believed that he was acting according to N rather than M is accordingly critical to her subsequent conciliatory strategy and reinterpretation of Bruce’s behavior. A detailed consideration of the epistemic states of agents, and of the interactive epistemology of their predicaments, can usefully inform the way we study rule- and norm-based behavior that is mediated by trust.
Interactive Epistemic States and Trust
Interactive epistemic states form a basis for analyzing the logic of trust, whether a moral or a rational basis is used as an interpretive schema for trustworthiness. They also provide a way of distinguishing between these two grounds. Expecting that trustworthy behavior is in the best interest of an agent’s would-be partner and expecting that the would-be partner shares the specific trust-based ethic of the agent are two plausible grounds for expecting the would-be partner to be trustworthy. Either can work as an explanation of trust, but each works in a different way. As a result, if Alice trusts Bob on rational grounds, and Bob trusts Alice on moral grounds, the resulting interaction can produce coordinative failure and mutual recrimination.
In the case of a “misunderstanding,” Alice may simply try to adjust the incentives of the coordination game or her signaling strategy, but Bob may make a radical negative attribution about Alice’s moral character. Breaches of trust—and their repair—can highlight much about the basis on which trust is extended or rescinded and, indeed, about the very nature of trust in a relationship.
To disambiguate this explanatory conundrum, we distinguish between “trust in integrity,” corresponding roughly to the moral view of trust, and “trust in competence,” corresponding roughly to the rational view of trust, and we argue that the conjoint condition—trust in integrity and trust in competence—explains a wide range of trust phenomena. We define trust in competence as a state of epistemic certainty or knowledge held by one agent that another knows a proposition if that proposition is true (and relevant to the agents’ interaction); we define trust in integrity as a state of epistemic confidence that an agent would assert what s/he knows to be true in a situation when that mattered. We analyze different combinations of trust in integrity and trust in competence, and examine the differences these combinations make.
Conceiving of trust in competence and trust in integrity as epistemic states of networked agents makes it possible for researchers to study trust explicitly by asking agents questions like “What do you believe?” or “What do you think X believes?” rather than implicitly and circularly by asking questions like “Whom do you trust?” It also makes it possible to study trust predictively (by inferring trusting-trustful behavior from response patterns) rather than post-dictively (by inferring these patterns from past behavior).
Because of the precise language in which trust phenomena are represented, however, we must pay special attention to the propositions we focus on; after all, we do not expect a trusted alter to know or to state everything that is true, or to know everything there is to know about the trustful ego. We therefore define a range of applicability for our trust model by focusing on the propositions that are relevant to each agent. To do so, building on the analysis in Chapter 3, we focus on propositions that are important to the co-mobilization and coordination of two or more agents in a network.
We show that what matters in a wide range of contexts is what agents think about what other agents think, and about what other agents think they think, about a situation. We accordingly posit that trust in integrity and trust in competence should be concomitant with a more coherent set of level 2 (I think you think) and level 3 (I think you think I think) beliefs: agents who trust one another are more likely to exhibit coherent interactive belief hierarchies, wherein what ego thinks alter thinks matches what alter thinks, and what ego thinks alter thinks ego thinks matches what ego thinks.
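Stated operationally, these two matching conditions are simple equality checks on a dyad’s belief hierarchy. The sketch below is a minimal illustration in Python; the field names and example beliefs are ours, not part of the formal apparatus.

```python
from dataclasses import dataclass

@dataclass
class DyadBeliefs:
    ego_thinks: str                          # ego's level 1 belief
    alter_thinks: str                        # alter's level 1 belief
    ego_thinks_alter_thinks: str             # ego's level 2 belief
    ego_thinks_alter_thinks_ego_thinks: str  # ego's level 3 belief

def level2_coherent(d: DyadBeliefs) -> bool:
    # What ego thinks alter thinks matches what alter actually thinks.
    return d.ego_thinks_alter_thinks == d.alter_thinks

def level3_coherent(d: DyadBeliefs) -> bool:
    # What ego thinks alter thinks ego thinks matches what ego thinks.
    return d.ego_thinks_alter_thinks_ego_thinks == d.ego_thinks

d = DyadBeliefs("innovation", "innovation", "innovation", "focus")
print(level2_coherent(d), level3_coherent(d))  # True False
```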
We model the knowledge and belief systems involved in the phenomenology of trust and argue that trust in a social network can be understood as a set of interactive beliefs that the trustful and the trusted share about each other’s propensity to register and report relevant and truthful information to one another. Prototypical of such relevant information are propositions that represent the backbone of coordinative, communicative, and collaborative activities. What matters to coordination and co-mobilization in a wide range of contexts is the coherence of agents’ thinking about other agents’ thinking (level 2 beliefs) and agents’ thinking about what others think they think (level 3 beliefs) about issues that are important to the interaction.
We argue that trust in competence and trust in integrity safeguard coordination and cooperation among networked agents, and therefore that they should correlate with coherent hierarchies of higher-level beliefs, which are required for successful coordination and co-mobilization. We marshal evidence that is consistent with this argument.
After motivating our approach to representing trust, we extend our epistemic description language (EDL) to describing knowledge states of agents in a social network (their “epistemic states”) and show how trust can be defined as an epistemic structure that depends on agents’ level 2 and level 3 beliefs. Extending the EDL in this way enables us to analyze trust as a modal epistemic state in which agents have coherent beliefs about other agents’ proclivity to know the truth (trust in competence) and to speak it (trust in integrity). We show how trust structures form the backbone of epistemically coherent subnetworks, and we use this language and the associated theory of trust to empirically investigate the interactive structure of trust among agents comprising a large intra-organizational network.
Finally, we extend our analysis to show how social networks can be analyzed in terms of “trust neighborhoods,” “trust corridors,” and “trust conduits” that form a network’s preferred conduits for sensitive information. We contribute a method of conceptualizing trust that allows researchers to detect and measure its presence and degree predictively (by measuring what agents think and what they think other agents think or know) rather than descriptively (by observing instances of trusting-trustful behavior). We also create a path for linking trust directly with the ability and tendency of agents in a network to mobilize and coordinate—important precursors for cooperation and collaboration.
Trust as an Interactive Epistemic State
We now turn to the development of an epistemically precise characterization of trust. Our characterization uses modal epistemic structures such as the one described in Chapter 2 to define confidence. Thus, trust can be understood as a form of confidence. We decompose it into two separate entities, trust in integrity (agents’ propensity to say no more and no less than what is true) and trust in competence (agents’ propensity to know no more and no less than what is true):
Trust in Integrity (Strong Form) (TI). “A knows that iff B knew P, B would assert P”: Ak(BkP ↔ BaP), where BaP denotes B acts on P or B asserts P, and P is a proposition relevant to the interaction between A and B. If A trusts B’s integrity, then A knows that iff B knows the truth, then B will assert the truth. In this form, (1) B’s knowledge of P is sufficient to guarantee to A that B will say P and (2) if P is false, then B will not say it; that is, B speaks the truth and nothing but the truth.
This definition of trust generates the following interactive belief hierarchy between A and B: Suppose that P = B does not trust A. Assume that this proposition is known to B, unknown to A, and (evidently) relevant to the interaction between A and B. If it is true that A trusts B, then A knows that B will assert P iff B knows that P is true, and therefore that B will inform A of his mistrust. Because of the biconditional (iff) nature of the relationship between knowledge and expression, A can infer, from the fact that B does not say “I don’t trust you,” that B actually trusts A. The argument is symmetric for A. Thus, trust in integrity as we have defined it generates an interactive belief hierarchy between the parties to the trusting relationship.
Our definition of trust in integrity (TI) sidesteps two complications. The first is that B must be aware that he knows P in order to say P given that he knows it—that is, Bk²P. The second is that the biconditional (↔) relating B’s knowing P to his asserting P is plausibly relaxed to the simple conditional, such that B will assert what he knows but will not necessarily know all that he asserts. These complications can be repaired, respectively, by the following relaxations of TI.
Trust in Integrity (Weak Form 1) (TIwf1). “A knows that iff B were level 2 aware of P, then B would assert P”: Ak(Bk²P ↔ BaP). This weak form of trust in integrity requires that B’s awareness of P be independently ascertained by A: A’s knowledge of B’s knowledge of P is not sufficient to warrant A’s knowledge that B will assert P. Thus, if B has reason to know P and A knows it and weak-form-trusts B, then B’s failure to bring up P is forgivable in the sense that A can continue to trust B in the weak sense, provided that A considers it possible that B temporarily overlooked or forgot P. This weaker form of trust incorporates imperfect recall, which has an important explanatory function in some interactive analyses of games (Aumann 2001).
Trust in Integrity (Weak Form 2) (TIwf2). “A knows that if B knew P, B would say P or act as if P is true”: Ak(BkP → BaP), where BaP denotes B acts on P or B asserts P. If A trusts B’s integrity in this weak sense, then A knows that if B knows the truth, then B will assert the truth, but not necessarily that all B would assert is the truth. B, in other words, is “trusted” by A to assert the truth but not to assert “nothing but the truth.”
The second part of our definition of trust refers to trust in competence. A may trust B’s intentions but not necessarily B’s ability to deliver, perform, or make good on them. In the way we have defined trust so far, A expects B to say only what B knows to be true, but not necessarily to know all that is true. Thus, we extend our definition of trust as follows.
Trust in Competence (TC). “A knows that iff P were true, B would know P”: Ak(P ∈ T ↔ BkP). If A trusts in the competence of B, then A trusts that B is a faithful register of true propositions and not a register of false propositions—that is, that B registers “the whole truth.”
Now we are in a position to characterize trust as the combination of trust in integrity and trust in competence. In its strong form, trust combines the strong form of trust in integrity (which we mean unless we signal otherwise) with trust in competence.
Trust Tout Court (T). “A trusts in both the competence and the integrity of B”: ATB → (ATIB & ATCB). If ATB, then A knows that B would assert the truth, the whole truth, and nothing but the truth.
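These definitions lend themselves to a toy operationalization. The following sketch is deliberately simplified and entirely our own: propositions are atomic strings, each agent’s knowledge and assertions are plain sets, and the biconditionals are checked from the modeler’s omniscient viewpoint rather than from within A’s own belief set (so the sketch tests whether B is in fact trustworthy over the relevant propositions, which is what A’s trust asserts).

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    knows: set = field(default_factory=set)    # propositions the agent knows
    asserts: set = field(default_factory=set)  # propositions the agent asserts

def trust_in_integrity(b: Agent, relevant: set) -> bool:
    """Strong-form TI: for every relevant P, B asserts P iff B knows P."""
    return all((p in b.knows) == (p in b.asserts) for p in relevant)

def trust_in_competence(b: Agent, true_props: set, relevant: set) -> bool:
    """TC: for every relevant P, B knows P iff P is true."""
    return all((p in true_props) == (p in b.knows) for p in relevant)

def trust(b: Agent, true_props: set, relevant: set) -> bool:
    """Trust tout court: TI and TC jointly, i.e., B asserts the truth,
    the whole truth, and nothing but the truth over the relevant set."""
    return (trust_in_integrity(b, relevant)
            and trust_in_competence(b, true_props, relevant))

# Example: B knows the true proposition "p" but asserts the unknown "q",
# so B passes the competence test yet fails the integrity test.
true_props, relevant = {"p"}, {"p", "q"}
b = Agent("B", knows={"p"}, asserts={"q"})
print(trust_in_integrity(b, relevant))               # False
print(trust_in_competence(b, true_props, relevant))  # True
print(trust(b, true_props, relevant))                # False
```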
Weaker forms of trust can be created by combining the trust-in-competence condition with the weaker forms of the trust-in-integrity condition, permitting us to create “trust hierarchies” and, rather than declare that trust exists or does not within a relationship, to become far more precise regarding the kind of trust that exists between two or more agents. Moreover, for any or all of a collection of agents A, B, and C, the relation T has the following properties.
Binariness (B). ∀ A, B: ATB or ~ATB. “For any A and B, A either trusts or does not trust B.”
Proof. We focus first on TI. Suppose that ATIB and ~ATIB—that is, Ak(BkP ↔ BaP) & Ak(BkP ↔ ~BaP). But this contradicts the binariness property of the k relation (a). The same argument applies for TC and therefore for the conjunction T = TI & TC.
Transitivity (T). ∀ A, B, C: ATB & BTC → ATC. “For any A, B, C, if A trusts B and B trusts C, then A trusts C.”
Proof. If ATIB, then Ak(BkP ↔ BaP). If BTIC, then Bk(CkP ↔ CaP). Let P = CkR ↔ CaR for some R. Clearly, BkP and, ex hypothesi, BaP. Because A knows that B will say only what B knows (to be true) and knows to be true only those sentences that are in fact true, AkP and therefore ATIC. Now let Q = R ∈ T ↔ CkR. Clearly, BkQ and, given ATIB, Ak(Q ∈ T). Therefore, ATCC, and therefore ATIC and ATCC—that is, ATC.
Does the fact that A trusts B entail the fact that B trusts A? The answer is no, and the proof is by counterexample. Suppose that A trusts B to assert the truth, the whole truth, and nothing but the truth but that B does not trust A to do so. Let P represent the proposition B does not trust A to assert the truth, the whole truth, and nothing but the truth. Ex hypothesi, P is true. Since A trusts B and P is true, B will assert P, and A will come to know P and therefore will know that B does not trust A. So, if A trusts B and B does not trust A, then our definition requires A to know that B does not trust A. However, in order for A not to trust B as a result of this knowledge, it is necessary to make A’s trust in B conditional on her knowledge of B’s trust in her, which does not necessarily have to be the case: A may trust B because of the “kind of person A believes B is” rather than because of a set of expectations about B’s behavior given a set of incentives and a purely self-maximizing disposition.
Indeed, if we introduce a self-maximizing interpretation for the foundation of trust (a mutual expectation of cooperative behavior in situations with payoffs captured by the prisoner’s dilemma game played between self-interested individuals (e.g., Axelrod 1997)), then it is the case that A cannot trust B coherently while knowing that B does not trust A. The proof is simple and follows directly from applying the definition of trust to the epistemic state space of the prisoner’s dilemma game.
Epinets, Trust, and Coordination: A Study of Senior Management Networks
Our definition of trust is clearly too broad if it is applied to all propositions that are possibly true: A cannot reasonably know or expect that B knows and asserts all true sentences. The range of admissible propositions, then, should be restricted (just as in our definition of knowledge) to propositions that are relevant to the interaction between A and B. Coherence of higher-order beliefs among networked agents is a condition for successful coordination and mobilization; propositions that are the focal points of coordination and mobilization scenarios are thus particularly interesting to our model of trust: these are the propositions that are likely “relevant” to both A and B.
Interactive belief hierarchies are interesting even if we do not go “all the way to common knowledge” and focus on level 2 and level 3 beliefs, as we illustrated in Chapter 2. In spite of the fact that full common knowledge cannot be achieved through non-face-to-face communication (as in Rubinstein’s (1989) e-mail game), most human agents infer common knowledge from level 2 knowledge when it comes to confirming a meeting by e-mail: they do not confirm confirmations or confirm confirmations of the confirmations, but nonetheless wait for confirmation before assuming that a meeting will take place at the proposed time and place.
Nov and Rafaeli (2009) showed that, in organizational contexts, individuals attach a premium to mutual knowledge (although mistakenly referring to it as common knowledge) relative to shared knowledge. This highlights the value individuals place on the interactive component of interactive belief hierarchies in coordination-intensive activities. Culture, whether social or organizational, may thus, as Kreps (1990) and Chwe (2000) explained, represent an interactively coherent body of shared beliefs: these may be either common knowledge or almost-common knowledge among the group’s members.
Therefore, it seems reasonable, as we argued in Chapter 3, to focus on the coherence of interactive beliefs—“Does what I believe match what you believe I believe?” “Does what I believe you believe I believe match what I believe?”—as indicative of the mobilizability and coordinatability of networked agents and so indicative of a significant element of the value of network closure (as defined in the social networks literature rather than in the epistemic sense advanced in Chapter 2). If almost-common knowledge crucially serves to enable coordination on interdependent activities, and if we assume that individuals are more likely to engage in coordinated action with others whom they trust than with others whom they do not trust, then we should be able to observe a link between trust and coherence of interactive beliefs. If trust engenders hierarchies of coherent level 2 and level 3 beliefs, then we should observe a high correlation between coherent belief hierarchies and trust. Moreover, the correlation may be stronger than that between the coherence of belief hierarchies and, for instance, the centrality of linked agents or the number of shared ties among agents’ alters within the network, measured by their constraint.
It is often assumed that network centrality brings with it an enhanced ability to mobilize or coordinate the network. However, as the analysis of the faculty networks presented in Chapter 3 illustrated, this need not be the case. Whatever the source of this enhanced ability, the arguments we advance indicate that it must possess an epistemic component, as mobilization and coordination are network processes that rest on level 2 and level 3 almost-common-knowledge conditions: Mobilizing agents must have accurate information about what other agents know, whereas coordinating agents must have accurate information both about what other agents know and about what other agents know they know.
Density of relationships, too, is widely held to influence the knowledge distribution and thus the mobilizability and coordinatability of a network. Chwe (1999, 2000), for example, took cliques (fully connected subnetworks) to be synonymous with pockets of local common knowledge. But, again, they need not be, as was also shown by the analysis of the faculty networks in Chapter 3. Indeed, one may communicate with several others who are all communicating with each other (i.e., one may be part of a clique), yet the mobilizability of each agent may not come to be common knowledge among clique members. Nor, as we have shown, is full common knowledge necessary in many mobilization scenarios; rather, mutual knowledge of co-mobilization is sufficient.
Our epistemic definition of trust allows us to investigate trust relationships empirically and noncircularly: empirically by uncovering what agents believe about each other’s propensity to know and speak the truth to each other; noncircularly by using the definitions of trust in competence and trust in integrity to probe the epistemic and logical structures of trust without ever asking, “Whom do you trust?”
Senior Management Networks. To examine these insights empirically, we collected network and epistemic data from senior managers of a large multidivisional, multiregional Canadian telecommunications firm. The data were collected in two phases. The first phase gathered network and biographical data via an online survey. Respondents were invited to participate through a personalized e-mail linking to the survey, which asked them to identify their significant work relationships without restriction on number.
To ensure accuracy (e.g., consistency in spelling; distinguishing people with similar or identical names), when a colleague’s name was entered in the survey, it was checked against a database containing the names and titles of all firm employees, and a list of suggested names and titles was presented to the respondent, from which s/he could select the desired individual. The respondent was then asked to rate the strength of each relationship from 1 (very weak) to 5 (very strong). In addition to work relationships, each respondent indicated his/her geographic location, divisional affiliation, and gender.
Of the 633 managers invited to participate in the survey, 593 (93.7 percent) completed it. The high response rate was the result of strong top management support and individualized e-mail reminders from the researchers.1 We used these network data to compute each respondent’s betweenness centrality, which measures the extent to which a respondent lies on the shortest paths between others in the network,2 and constraint, which measures the cohesiveness of the respondent’s relationships.3 A respondent’s network is constraining to the extent that it is directly or indirectly concentrated in a single contact; more constraint implies fewer bridging ties spanning structural holes.
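Both measures are standard and can be computed with off-the-shelf tools. The sketch below uses the networkx package on a small hypothetical network rather than the study’s data; betweenness is computed on hop counts for simplicity, and Burt’s constraint on the undirected projection of the ties.

```python
import networkx as nx

# Hypothetical directed ties weighted by reported relationship strength (1-5).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("ada", "bart", 5), ("bart", "ada", 4),
    ("bart", "cleo", 5), ("cleo", "ada", 3),
])

# Betweenness centrality: the share of shortest paths between other pairs
# of nodes that pass through a given respondent.
betweenness = nx.betweenness_centrality(G)

# Burt's constraint: the extent to which a respondent's ties are
# concentrated in contacts who are themselves tied to one another.
constraint = nx.constraint(G.to_undirected())

print(betweenness)
print(constraint)
```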
The second phase of data collection took place during a series of voluntary workshops to which the 593 individuals who completed the initial survey were invited to receive feedback on their networks. Each attendee received a second personalized survey to complete. To assess level 1 beliefs, the survey asked respondents to indicate what they believed their firm’s success depended on most: “innovation,” “focus,” or “marketing.”4 Respondents answered the same question for each alter (up to a maximum of 10) whose relationship strength they had earlier rated as either 4 or 5 (on the 5-point scale).
We focused on propositions about drivers of the firm’s success because they are relevant to both the successful coordination and the co-mobilization of the respondents: they can be thought of as focal points in a game in which managers try to coordinate their actions around a set of accepted goals. To the extent that these goals are common or almost-common knowledge, they form an important part of the “culture” of the organization (Kreps 1990).
Next, respondents rated, on a scale from 1 (definitely not) to 7 (definitely), the strength of their level 2 belief that they knew what each alter believed was the main driver of the firm’s success, and their level 3 belief that each alter also knew how they had responded.
Finally, to assess trust in competence and trust in integrity, the respondents were asked to rate, on a 7-point scale, whether they could count on each alter “to be ‘in the know’ and to communicate what s/he knows.”5 A total of 302 responses were received from workshop attendees, of which 296 were usable for the analysis. Combining the network and epistemic survey data yielded 608 significant work relationships among the survey respondents, of which 113 were reciprocated at the required cut-off level (≥4 on a 5-point scale of relationship strength) for both respondents.6
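The reciprocation rule is a simple filter over the directed, strength-rated ties; a sketch (again on hypothetical data) follows.

```python
import networkx as nx

# Hypothetical directed ties with strength ratings (1-5).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("ada", "bart", 5), ("bart", "ada", 4),    # reciprocated at the cut-off
    ("bart", "cleo", 5), ("cleo", "bart", 3),  # not reciprocated at >= 4
])

# A tie counts as reciprocated when both directions are rated >= 4.
reciprocated = [
    (u, v) for u, v in G.edges()
    if u < v                       # count each unordered pair once
    and G[u][v]["weight"] >= 4
    and G.has_edge(v, u)
    and G[v][u]["weight"] >= 4
]
print(reciprocated)  # [('ada', 'bart')]
```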
Centrality, Trust, and Coherence. Table 4.1 presents descriptive statistics and bivariate correlations among the study variables for the sample of 608 significant work relationships. Of particular interest are correlations among respondent i’s trust in competence and trust in integrity of alter j; level 2 and level 3 beliefs regarding alter j; knowledge of alter j’s level 1 belief about the basis of the firm’s success (variable ij matched; coded 1 if correct and 0 otherwise); and measures of respondent i’s betweenness centrality and constraint. The correlations are strong and positive among the strength of trust and level 2 and level 3 beliefs, as well as for knowledge of alters’ level 1 beliefs regarding the basis of firm success. Betweenness centrality and constraint are weakly correlated with these variables, however, and with one exception not significantly different from zero.
TABLE 4.1 Descriptive Statistics and Correlations for All Relationships
SOURCE: Moldoveanu and Baum: “I Think You Think I Think You’re Lying”: Interactive Epistemology of Trust in Social Networks. Management Science 57(2), 2011, pp. 393–412. Copyright 2011, Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 300, Hanover, MD 21076 USA. Reprinted with permission.
NOTE: N = 608; correlations > .08 significant at p < .05.
Table 4.2 reports ordinary least squares (OLS) regression estimates for a set of multivariate models of trust.7 These models estimate each variable’s independent association with trust, net of a set of control variables that may materially affect respondents’ trust in alters, including respondent and alter gender and tenure with the firm; whether the respondent and alter are the same gender; and whether the respondent and alter work in the same location and/or company division. It is important to control for these respondent and alter characteristics because our theoretical variables, including the specific level 2 and level 3 beliefs we studied, may be correlated with other general and more specific types of information that the respondents may possess about each other.
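A sketch of such a specification, using the statsmodels formula interface, is given below; the file and column names are hypothetical stand-ins, not the study’s actual variable set.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dyad-level file: one row per significant i -> j relationship.
df = pd.read_csv("dyads.csv")

# Respondent i's trust in alter j regressed on belief strengths, network
# position, and the controls described above.
model = smf.ols(
    "trust_ij ~ level2_belief + level3_belief"
    " + betweenness_i + constraint_i"
    " + same_gender + same_location + same_division"
    " + tenure_i + tenure_j",
    data=df,
).fit()
print(model.summary())
```

The logit specifications reported below (Table 4.3) would substitute smf.logit for smf.ols, with the binary ij matched indicator as the dependent variable.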
The multivariate regression estimates confirm the correlational analysis. Net of the control variables, coefficients for both level 2 and level 3 beliefs are significant and positive, while the network variables are not. Notably, common work location and alter tenure are also positively related to respondents’ trust in competence and trust in integrity of alters.
Table 4.3 reports multivariate logit regression models estimating whether or not respondents were correct regarding alter j’s level 1 beliefs about the basis of the firm’s success (ij matched). These models estimate the independent associations of respondents’ trust in alter j; level 2 and level 3 beliefs about alter j; and network positions, with respondents’ correct identification of alter j’s level 1 beliefs, controlling for respondent and alter demographic characteristics.8 In the models, coefficients for both trust and level 2 beliefs are significant and positive, while coefficients for level 3 beliefs are not.
Notably, in the full model, consistent with the idea that network centrality and density influence knowledge distribution (e.g., Coleman 1988, 1990; Burt 1992), respondents whose network positions are more central and constrained are more likely to be knowledgeable about their alters’ level 1 beliefs. Common work division is also positively related to knowledge of alters’ beliefs, which is sensible in light of the beliefs assessed.
Reciprocated Ties. So far, we have examined significant work relationships regardless of whether or not they are reciprocated. We now turn to the 113 relationships that are reciprocated to present a more fine-grained, dyadic analysis of the effects of trust and belief strength.
Table 4.4 presents descriptive statistics and bivariate correlations among the variables for the sample of reciprocated relationships. Of particular interest are respondents’ trust in competence and trust in integrity of, and level 2 and level 3 beliefs about, each other and their network characteristics. As before, the correlations are strong and positive for respondents’ trust in and level 2 and level 3 beliefs about alters. The correlation between respondents’ trust in each other is also strong and positive. The correlations between respondents’ level 2 and level 3 beliefs and their trust in each other are positive, although weaker, and significant only in the case of level 3 beliefs. Three of four correlations among respondents’ level 2 and level 3 beliefs are positive and significant as well.
TABLE 4.2 Ordinary Least Squares Regression Models of Trust
SOURCE: Moldoveanu and Baum: “I Think You Think I Think You’re Lying”: Interactive Epistemology of Trust in Social Networks. Management Science 57(2), 2011, pp. 393–412. Copyright 2011, Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 300, Hanover, MD 21076 USA. Reprinted with permission.
NOTE: N = 608.
†p < .10; *p < .05; **p < .01; ***p < .001.
TABLE 4.3 Logit Regression Models of Knowledge of Alters’ Level 1 Beliefs
SOURCE: Moldoveanu and Baum: “I Think You Think I Think You’re Lying”: Interactive Epistemology of Trust in Social Networks. Management Science 57(2), 2011, pp. 393–412. Copyright 2011, Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 300, Hanover, MD 21076 USA. Reprinted with permission.
NOTE: N = 608.
†p < .10; *p < .05; **p < .01; ***p < .001.
TABLE 4.4 Descriptive Statistics and Correlations for Reciprocal Relationships
SOURCE: Moldoveanu and Baum: “I Think You Think I Think You’re Lying”: Interactive Epistemology of Trust in Social Networks. Management Science 57(2), 2011, pp. 393–412. Copyright 2011, Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 300, Hanover, MD 21076 USA. Reprinted with permission.
NOTE: N = 113; correlations > .17 significant at p < .05.
Among the network measures, a respondent’s constraint is positively correlated with his/her own level 2 belief strength, consistent with the idea that the density of relationships promotes interpersonal cohesion and formation of norms (Coleman 1988, 1990). More strikingly, however, a respondent’s strength of trust in, and level 2 and level 3 beliefs about, alter j are negatively and significantly correlated with alter j’s betweenness centrality. These negative correlations resonate with the idea that network centrality fosters a competitive orientation among agents as they attempt to take advantage of opportunities for information brokerage and control to increase their autonomy and others’ dependence on them (Burt 1992; Moldoveanu, Baum, and Rowley 2003).
Table 4.5 presents a multivariate analysis of trust in the reciprocated relationships. Because errors are likely to be correlated across equations estimating the trust of respondent i in alter j and of alter j in respondent i using the same data, we estimated these equations using a seemingly unrelated regression (SUR) model, which allows correlated errors between equations (Greene 2000). The multivariate analysis again confirms the correlation analysis. Consistent with those in Table 4.3, estimates based on reciprocated relationships indicate that the respondents’ level 2 beliefs about alters are positively associated with their trust in them, while their level 3 beliefs are not. Estimates also show that respondents’ independently reported trust in each other is positively related.
Moreover, respondents’ network constraint is positively correlated with their level 2 belief strength. The estimates additionally indicate that respondents’ trust in alters is negatively related to alter betweenness and positively related to alter constraint. Again, these findings are consistent with the idea that dense relationships promote interpersonal cohesion and norms of cooperation (Coleman 1988, 1990), while central network positions create opportunities and incentives for information brokerage and control (Burt 1992; Moldoveanu, Baum, and Rowley 2003). Among the control variables, it is notable, and somewhat ironic, that more senior respondents expressed more trust in alters while simultaneously being less trusted.
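A sketch of this estimation setup, assuming the SUR interface of the Python linearmodels package and hypothetical file and column names, might look as follows.

```python
import pandas as pd
from linearmodels.system import SUR

# Hypothetical file: one row per reciprocated dyad, carrying both
# directions' trust ratings, belief strengths, and network measures.
df = pd.read_csv("reciprocated_dyads.csv")

# One equation per direction of the dyad; SUR allows the errors of the
# two equations to be correlated, as they plausibly are here.
equations = {
    "trust_i_in_j": {
        "dependent": df["trust_ij"],
        "exog": df[["level2_ij", "level3_ij", "betweenness_j", "constraint_j"]],
    },
    "trust_j_in_i": {
        "dependent": df["trust_ji"],
        "exog": df[["level2_ji", "level3_ji", "betweenness_i", "constraint_i"]],
    },
}
res = SUR(equations).fit()
print(res)
```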
Taken together, the results of the foregoing analysis corroborate the idea that trust is associated with deeper hierarchies of coherent level 2 and level 3 beliefs. The analysis reveals not only a strong covariation among respondents’ level 2 and level 3 beliefs about alters and their trust in competence and trust in integrity of alters; it also reveals a strong covariation of their knowledge of alters’ level 1 beliefs. In contrast, while respondents’ betweenness centrality and constraint are positively associated with their knowledge of alter j’s level 1 beliefs, they are either unrelated or negatively related with their trust in alters. Thus, within the large telecommunications firm examined here, centrality is not necessarily indicative of relative informedness (especially when higher-level knowledge is included in what counts as information) and the enhanced ability to mobilize or coordinate the network that accompanies it; nor is agent constraint necessarily indicative of common knowledge. Moreover, when high constraint in networks results from dense connections among agents (rather than connections through a single individual, as in a hierarchy), it ought to be positively related to trust (Burt 1992). Although constraint derives primarily from density in our empirical setting, this is generally not what we find. The epinet model thus does a better job of explaining trust—and predicting it ex ante—than do intuitive and widely accepted structural models.
How Trust Partitions and Propagates Information in Social Networks
Trust and security are vital to information diffusion, verification, and authentication in social networks, and to a precise explanation of the network position advantage that accrues to certain agents. The effective brokerage of information across a structural hole depends on the broker’s trustworthiness in the eyes of both the transmitter and the recipient of that information. Thus, if trust safeguards coordination and co-mobilization, it is reasonable to assume that it also safeguards coordination-intensive tasks such as truthful communication and therefore the spread of useful and veridical information within a network.
Communicating critical information is a coordination-intensive task because it takes place in the background of shared assumptions and orientations whose commonality is important to the accurate receipt of the information in question: “The CEO told me we are moving away from vertical markets” may be highly useful information to someone who knows the context of the conversation in which it emerged and quite useless information to someone who does not know it. In this section, we show how the description language we have introduced can be used to model the flow and authentication of information in a social network, and we pave the way to empirical analyses of informational dynamics in networks.
Trust Partitions. Epinets allow us to state how and why trust and security matter by making it possible to state sufficient epistemic conditions that trusting, trusted, secure, and authenticated networked agents fulfill. In particular, we define the following.
Trust Neighborhood (NTi(G), NTb(G)). A fully linked subnetwork of G (a clique) whose members share trust in one another’s competence and integrity (weak form). A trust neighborhood is a good model for a network of close ties in which agents can rely on one another for truthful and truth-like knowledge sharing, conditional on awareness: if an agent is aware of P (knows it and knows she knows it), then she will share P with others. Thus, in a trust neighborhood, communication is truthful, but what is communicated may not be “the whole truth.” Trust neighborhoods can represent referral cliques, often used to get the “inside story” on people and organizations that have a history of interactions within an industry.
A referral clique is a clique of agents in which information flows are “trustworthy” in the technical senses of trust introduced earlier. Within a trust neighborhood (a “circle of trust”), sensitive information is likely to flow more reliably and accurately than it does outside of it. Our epistemic representation of trust allows us to map the trust neighborhoods within a network precisely and therefore to make predictions about the reliable spread of accurate information within the broader network of contacts.
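Operationally, trust neighborhoods can be extracted by restricting a directed trust network to its mutual trust ties and enumerating maximal cliques. The sketch below assumes a hypothetical boolean trust attribute on each directed edge.

```python
import networkx as nx

def trust_neighborhoods(G: nx.DiGraph, min_size: int = 3):
    """Maximal cliques of the subgraph of mutual trust ties."""
    M = nx.Graph()
    M.add_edges_from(
        (u, v) for u, v in G.edges()
        if G[u][v].get("trust") and G.has_edge(v, u) and G[v][u].get("trust")
    )
    return [c for c in nx.find_cliques(M) if len(c) >= min_size]

# Hypothetical example: a triangle of mutual trust plus one one-way tie.
G = nx.DiGraph()
for u, v in [("a", "b"), ("b", "a"), ("b", "c"), ("c", "b"),
             ("a", "c"), ("c", "a"), ("c", "d")]:
    G.add_edge(u, v, trust=True)
print(trust_neighborhoods(G))  # [['a', 'b', 'c']] (member order may vary)
```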
Security Neighborhood (NS(G)). A fully linked subnetwork of G (a clique) whose members share trust in one another’s competence and integrity (strong form). Security neighborhoods are high-trust cliques. They may be good representations for networks of field operatives in law enforcement—where authentication of communicated information is crucially important to the payoff for each agent—or for conspiratorial networks, such as a subgroup of directors trying to oust the company’s CEO. The reason for calling such a network a security neighborhood is that it has (common) knowledge of itself as a trust neighborhood and thus possesses an important authentication mechanism for communication, which trust neighborhoods based on weak-form trust do not possess.
If A and B belong to the same trust neighborhood, then C (a third party unknown to A but known to B) can insert herself into the communication between A and B and contribute information to them. If the fact that C is part of the trust neighborhood is not itself common knowledge, then the information that C contributes is not authenticated, in the sense that it is not possible for A to decide whether or not to trust information coming from C without consulting B. If, on the other hand, A and B are part of a security neighborhood, then C will immediately be recognized as an outsider. Common knowledge of clique membership is a key element in authentication, which is itself important in subnetworks concerned with infiltration from the outside. A key property of a security neighborhood is thus that it necessarily has common knowledge of the fact that it is a security neighborhood, as well as common knowledge of any fact that is relevant to it:
Proposition. A security neighborhood is a common knowledge neighborhood.
Proof. Consider a dyad (A, B) where ATB and BTA, both in the strong sense. For any relevant P, it is the case that Ak(P = true ↔ BaP) and Bk(P = true ↔ AaP). For instance, let P = ATB. Since P is true and Ak(P = true ↔ BaP), and since knowledge is veridical, Ba(ATB) is true. Now, suppose that P is true but that there is some level of almost-common knowledge, n, at which it is not the case that (AkBk)ⁿP—that is, ~(AkBk)ⁿP. Then, at the almost-common-knowledge level n − 1, it cannot be the case that (AkBk)ⁿ⁻¹P. To see this, suppose that Pₙ₋₁, the proposition (AkBk)ⁿ⁻¹P, were true; together with ATB and BTA, it would imply (AkBk)ⁿP, contrary to assumption. Descending the hierarchy level by level in this way yields ~P, which contradicts the truth of P. Therefore, if P is true, ATB, and BTA, then P is common knowledge.
Trust and Information Pathways. If “trust matters”—and if it matters crucially for the way in which an agent conveys information to an alter—then classifying network ties in terms of the degree of trust that the linked agents share should allow us to make progress in understanding the dynamics of critical information in a network. Concatenating trustful-trusting relationships, we can define the most likely, the fastest, or the most reliable paths by which relevant and truthful information flows in a social network.
Trust Conduit from Agent A to Agent J. Path A-B-C . . . J from A to J, passing through agents B . . . I such that A trusts B, B trusts C, C trusts D . . . I trusts J. Trust conduits can enable reliable knowledge flows in networks: information that comes from A is trusted by B, information that comes from B is trusted by C, and so forth, such that information flows credibly along a trust conduit. Trust conduits can represent knowledge pipes in organizations and markets, and thus they enable us to study the dynamics of new information propagation. They can also be used to effect a useful distinction between facts and rumors: facts are bits of information that have propagated along a trust conduit; rumors are bits of information that have not. Since facts can be used to check rumors, trust conduits not only enable speedy propagation of useful relevant information but also afford checks and constraints on rumor propagation. Rumors should thus die out more rapidly in networks seeded with many trust conduits than in those lacking them.
Trust Corridor. Because trust is generally not symmetric (particularly in its weak forms), we define a trust corridor as a two-way trust conduit. A trust corridor is useful for representing reliable bidirectional knowledge flows within a network, thus increasing the degrees of freedom associated with any particular flow. If reliable knowledge can flow in both directions in a knowledge pipeline, rumor verification can proceed more efficiently, as any one of the agents along the path of the corridor can use both upstream and downstream agents for verification.
Trust corridors may be a good representation for expert networks made up of agents who can verify relevant rumors that come to the attention of any one of them. They can thus be seen both to accelerate the reliable transmission of facts and to impede the promulgation of unverified or unverifiable rumors within a network. Figure 4.1 illustrates the trust neighborhood, conduit, and corridor concepts graphically. Figure 4.2 maps out these epistemic regimes for the telecommunications firm we studied. If trust safeguards effective communication of reliable and accurate information, then we expect relevant bits of information to propagate relatively faster and more efficiently within trust neighborhoods and along trust and security conduits.
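Both constructs are straightforward to operationalize. The sketch below finds a (shortest) trust conduit by searching the subgraph of trust edges, and reads a corridor as a conduit each of whose steps is also trusted in the reverse direction; the graph and its trust attribute are, again, hypothetical.

```python
import networkx as nx

def trust_conduit(G: nx.DiGraph, a, j):
    """A shortest directed path from a to j along which each agent
    trusts the next; None if no such path exists."""
    T = G.edge_subgraph(
        (u, v) for u, v in G.edges() if G[u][v].get("trust")
    )
    try:
        return nx.shortest_path(T, a, j)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return None

def trust_corridor(G: nx.DiGraph, a, j):
    """A two-way trust conduit: every step of the conduit is also
    trusted in the reverse direction."""
    path = trust_conduit(G, a, j)
    if path and all(G.has_edge(v, u) and G[v][u].get("trust")
                    for u, v in zip(path, path[1:])):
        return path
    return None

# Hypothetical example: trust runs a -> b -> c, but only b -> a back.
G = nx.DiGraph()
for u, v in [("a", "b"), ("b", "a"), ("b", "c")]:
    G.add_edge(u, v, trust=True)
print(trust_conduit(G, "a", "c"))   # ['a', 'b', 'c']
print(trust_corridor(G, "a", "c"))  # None: c does not trust b back
```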
Special network relationships are required for the propagation of “sensitive” information—information that senders wish to be assured will reach only and all intended recipients. In such situations, agents’ knowledge of the integrity of the trust conduit they access is required for the conduit to function as an information channel. The conditions for such “superconductive” conduits can be made more precise through the use of epinets in the following ways.
Figure 4.1 Trust Regimes
SOURCE: Moldoveanu and Baum: “I Think You Think I Think You’re Lying”: Interactive Epistemology of Trust in Social Networks. Management Science 57(2), 2011, pp. 393–412. Copyright 2011, Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 300, Hanover, MD 21076 USA. Reprinted with permission.
Security Conduit from Agent A to Agent J. Path A-B-C . . . J, from A to J passing through agents B . . . I such that A trusts B, B trusts C, C trusts D . . . I trusts J and the conduit is common knowledge among (A . . . J). This feature renders such conduits robust against infiltration, as each user of the communication channel embodied in the conduit can be authenticated by any of the agents comprising it. Security conduits can be understood as authenticated knowledge pipes representative of operative agent networks (e.g., secret agent networks) in which rapid authentication of incoming messages can be performed quickly using agents’ (common) knowledge of each others’ conduit membership.
Figure 4.2 Trust Regimes within a Large Telecommunications Company
SOURCE: Moldoveanu and Baum: “I Think You Think I Think You’re Lying”: Interactive Epistemology of Trust in Social Networks. Management Science 57(2), 2011, pp. 393–412. Copyright 2011, Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 300, Hanover, MD 21076 USA. Reprinted with permission.
Security Corridor. A security corridor is a two-way security conduit that represents subnetworks in which (1) rumor verification can be more rapid than in one-way knowledge-conducting structures, and (2) the probability of undesired “infiltration” is low because of the authentication mechanism available to “strong-form” trust-based subnetworks. Security corridors may also represent authenticated expert networks that are characteristic, for instance, of robust social referral networks, which cannot easily be infiltrated and can be used for both fast rumor verification and the robust circulation of reliable knowledge.
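The security conduit's defining addition can also be sketched in code. Genuine common knowledge is an infinite hierarchy of epistemic states; the toy model below captures only its first level, requiring that each member's believed roster match the conduit's actual membership. All names and ties are illustrative.

```python
# Sketch: a security conduit is a trust conduit whose membership is (commonly)
# known to every member. Only first-level membership knowledge is modeled here.

import networkx as nx

G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "D")])   # u -> v: "u trusts v"

def is_trust_conduit(G, path):
    return all(G.has_edge(u, v) for u, v in zip(path, path[1:]))

def is_security_conduit(G, path, knows_members):
    """Trust conduit whose full membership every member knows."""
    members = set(path)
    return (is_trust_conduit(G, path)
            and all(knows_members.get(a, set()) == members for a in path))

def authenticate(claimant, member, knows_members):
    """Any member can vet a claimed member against the shared roster."""
    return claimant in knows_members.get(member, set())

knows = {a: {"A", "B", "C", "D"} for a in "ABCD"}
print(is_security_conduit(G, ["A", "B", "C", "D"], knows))  # True
print(authenticate("E", "B", knows))                        # False: E is an infiltrator
```

The authenticate helper captures why such structures resist infiltration: an interloper must deceive every member about the roster at once, rather than deceiving any single member about a single tie.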
Taken together, the epistemic building blocks we have introduced allow researchers to study not only the impact of trust on coordination and mobilization but also the impact of trust and trust-related structures (neighborhoods, conduits, corridors; security neighborhoods, corridors, conduits) on the propagation of information within a network. Seen through this lens, trust appears to provide a mechanism for the informational superconductivity of social networks and offers an explanation for the differential propagation of information in organizations and markets. Trust ties are informational superconductors, and the subepinets that characterize trust and security neighborhoods can be thought of as resonant cavities in which sensitive information reverberates.
However, trust-mediated structures may exhibit epistemic quirks, or counterintuitive effects, that can be investigated using the precise description of epistemic states that epinets afford. We can, in particular, make predictions about the relative conductance of a trust conduit and the effects of perceptions about the relevance of a piece of information on that information’s dynamics. Suppose that Alina communicates “in confidence” a secret piece of information, S, to Belinda, whom she trusts; Alina therefore believes that S is safeguarded by this trust. Belinda immediately lets Claire, whom Belinda trusts, in on S. Claire proceeds in a similar fashion, and within days S becomes shared knowledge within the organization (even though very few know that others know it).
Has Belinda broken Alina’s trust? And if so, where? The epistemic definition of trust we introduced suggests a simple answer: Belinda broke Alina’s trust when she did not tell Alina that she told Claire S, because the proposition I told Claire S is relevant and true and Belinda knows it to be relevant and true. However, if Belinda trusts Claire, then she can justify not telling Alina that now Claire knows S by reasoning that because Claire will keep the information to herself, S will not get any further within the organization, which is something that Alina would not consider relevant. Of course, if Claire reasons the same way about Belinda, S will freely propagate and become relevant as a result of the very way in which Belinda has defined relevance. This example highlights both the importance of trust conduits in the propagation of “secrets” and the potentially insidious local effects of trust in promoting the very kinds of behavior that it seems designed to preclude.
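The dynamics of Alina's secret can be captured in a toy simulation: every agent who learns S passes it "in confidence" to her own trusted alters, each reasoning, as Belinda does, that the chain stops there. The network size, seed, and fan-out below are arbitrary illustrative choices.

```python
# Toy cascade: each agent who learns S confides it onward, believing it stops there.

import random

random.seed(7)
agents = ["Alina", "Belinda", "Claire"] + [f"agent{i}" for i in range(37)]

# Each agent confides in two trusted alters chosen at random; Alina confides
# only in Belinda. Size and fan-out are arbitrary.
trusts = {a: random.sample([b for b in agents if b != a], 2) for a in agents}
trusts["Alina"] = ["Belinda"]

knows_S = {"Alina"}
frontier, day = ["Alina"], 0
while frontier:
    day += 1
    newly_told = []
    for teller in frontier:
        for confidant in trusts[teller]:
            if confidant not in knows_S:
                knows_S.add(confidant)       # S is passed on "in confidence"
                newly_told.append(confidant)
    frontier = newly_told

print(f"S reaches {len(knows_S)} of {len(agents)} agents in {day} days")
```

Note what the cascade spreads: first-order knowledge of S only. No step creates knowledge that others know S, which is why S can become widely shared while almost nobody knows that others know it.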
Trust can be built, but it can also be broken and, with it, the network epistemic structures that are held together by trust ties. Because we have defined trust noncircularly (in terms of epistemic states and subjective beliefs that can be tested), it should be possible, using the representational tool of epinets, to investigate breaches of trust (intentional and otherwise) and identify specific consequences of trust-making and trust-breaking moves to the resilience of social networks.
Trust and the Fragility and Robustness of Social Networks
Recent interest in the characterization and optimization of network robustness (e.g., Callaway et al. 2000; Albert and Barabási 2002; Paul et al. 2006) has focused on the survivability of certain topological and structural features of a network (connectedness, diameter, characteristic path length) in the face of “attacks” that either sever ties among networked agents or remove from the network one or more agents that are topologically significant (e.g., “hubs” in center-periphery networks). Such analyses have proceeded in a more or less “mechanical” fashion by deriving the evolution of networks in the face of various random and targeted attacks, but they have not considered the relationship between particular kinds of attacks and particular ties, or the relative disruptiveness of different attacks on different network structures.
To make clear what we have in mind, suppose that you are a “social terrorist” who wishes to maximally disrupt the smooth functioning of a group, organization, or institution and are aware of the importance of trust-based ties to network integrity and of network trust and security neighborhoods to the reliability and speed of information propagation within the organization. With this knowledge, you can design strategies to disrupt the precise set of trust-based ties that safeguard passage of information between two subnetworks that are connected only by one or a few “trust brokers.” These strategies can be targeted at undermining either trust in integrity or trust in competence through the introduction of noise and distortion in individual or group communications that make it difficult to ascertain the degree to which trust was broken or kept in any given interaction.
Alternatively, understanding the importance of common knowledge or common beliefs regarding the secureness of a security conduit or corridor, you can systematically introduce noise or doubt in the higher-level epinets that safeguard security or trust in order to undermine the integrity and cohesiveness of these trust-mediated structures. This can be done through communications that break up cohesive cliques and trust circles or that weaken the confidence of members of such subnetworks in the veracity or reliability of the information being communicated.
Based on such considerations, breaches and repairs of trust can illuminate the relative fragility and robustness of social networks in new ways. Because trust (in integrity and in competence) is defined as a relationship between agents’ epistemic states, epinets enable us to study not only its effects on higher-level epistemic states and on the flow of information in networks, but also the dynamics of trust-making and trust-breaking phenomena. La Rochefoucauld pointed out in his Maxims (1912) that it is impossible to distinguish a person who behaves in a trustworthy fashion because “that is the kind of person that he is” from a person who behaves trustworthily for a (potentially very long) period of time in order to advantageously exploit the trust that he has slowly and meticulously built up.
For example, A’s trust in B creates opportunities for B to take advantage of A. In such a case, it is B’s consistent and long-running refusal to take such opportunities in situations in which doing so has negligible costs that gives the trust relationship its unusual force. Once again, the importance of counterfactual and subjunctive epistemic states reveals itself: “If B wants to take advantage of me—A may reason—she can”; the fact that she does not functions as a certifier for the ongoing trust that A places in B.
Trust is a delicate relationship because it takes only one counterexample to the general-form statement “If P were true, then B would assert P or act as if P is true” to refute the proposition and thus undermine the trust that A has in B, while it takes an infinite—and therefore infeasible—number of confirmations to prove it. It may be objected that this account of trust is too simple because it leaves out phenomena such as contrition and generosity, which are valuable under noisy conditions (Axelrod 1997). Considering the effects of epistemic structures in such cases is beyond what we attempt to accomplish here, but it is clear that higher-level epistemic states are relevant to a comprehensive investigation of these phenomena.
For instance, if A has reason to believe that B construes their interaction environment as noisy, then A can exploit the fact that B is likely to interpret A’s trust-breaking actions as noise. If B does not know this, then he is more likely to forgive A than if he knows that A has exploited the camouflaging properties of noise, in which case B may react even more punitively than he would in a noise-free environment in order to punish A’s (double) breach of trust.
Such observations make it possible for us to offer an endogenous account of structural hole formation (Burt 1992), insofar as we understand structural holes as “trust holes” (of integrity, competence, or both). Suppose that A trusts both the integrity (strong form) and competence of B and B trusts both the integrity and competence of C. Then A trusts the integrity and competence of C (in the strong sense) and the subnetwork A-B-C constitutes a security neighborhood. Now suppose that, per La Rochefoucauld, B has worked on obtaining A’s trust over time with the intention of breaking it at an opportune moment and that the opportune moment is the possibility of creating a trust hole between A and C that only B can bridge.
The key to B’s realization of his intention is the recognition that the knowledge operator (“knows P,” or kP) depends on the relevance of what is known to the aims of the knower(s). Thus, if C knows P (Cedar needles are green) but P is not relevant to A (an investment banker who has never seen a cedar tree), then the fact that C does not assert P cannot constitute a breach of A’s trust of C; otherwise, the trust relationship condition would be too strong because it would require the parties to constantly assert propositions that are true but irrelevant. Relevance is thus a condition that can be made as precise as the rest of the analysis so far.
Relevance. P is relevant to C, for instance, iff C’s knowledge of P will cause C to change her planned course of action in a particular case.
Armed with this analysis, B, our putative “insider” and trust breaker, can now exploit small and potentially transient asymmetries of information or insight in order to break A’s trust in C and thereby create the A-C trust hole that only B can bridge. In particular, to create the hole B sets up a situation in which (1) C knows P but believes that P is not relevant to A; (2) P is relevant to A; (3) A knows C knows P; and (4) C does not assert P, which is deemed by A to contradict the proposition If P is true, C will assert P, which underlies her trust in C.
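Conditions (1) through (4), together with the relevance definition above, can be stated as a predicate over a toy representation of the agents' epistemic states. The dataclass fields and names below are illustrative; in particular, the "relevant" set stands in for the plan-changing test, which a fuller model would compute rather than stipulate.

```python
# Sketch: B's "trust hole" blueprint as a predicate over toy epistemic states.
# Relevance is stipulated per agent rather than derived from plans.

from dataclasses import dataclass, field

@dataclass
class Agent:
    knows: set = field(default_factory=set)        # propositions the agent knows
    relevant: set = field(default_factory=set)     # propositions that would change plans
    asserted: set = field(default_factory=set)     # propositions the agent has asserted
    knows_others_know: dict = field(default_factory=dict)  # name -> set of propositions

def trust_hole_opportunity(A, C, P, c_name="C"):
    return (P in C.knows                                     # (1) C knows P...
            and P not in C.relevant                          #     ...but deems it irrelevant
            and P in A.relevant                              # (2) P is relevant to A
            and P in A.knows_others_know.get(c_name, set())  # (3) A knows C knows P
            and P not in C.asserted)                         # (4) C never asserts P

A = Agent(relevant={"P"}, knows_others_know={"C": {"P"}})
C = Agent(knows={"P"})                          # P is irrelevant to C and unasserted
print(trust_hole_opportunity(A, C, "P"))        # True: A will infer a breach
```

On this rendering, B's search problem is simply to find a proposition P and a pair (A, C) for which the predicate evaluates to true, and then wait.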
Using this blueprint for the undermining of trust, B can look for situations that allow him to exploit asymmetries of understanding between A and C, wherein A and C attach different degrees of relevance to the same proposition, P, which both nonetheless consider true. On finding such a proposition, B must then ascertain that A knows that C knows P and must rely on the mismatch in ascribed degrees of relevance to cause the breach of trust.
For instance, sectors of the telecommunications industry such as WiFi (a protocol for wireless local area networking based on the IEEE 802.11a standard) and WiMax (based on the IEEE 802.16d–e standards) rely on large, established chip manufacturers like Intel for system-developer ecosystems that embed their chips into products. A system manufacturer with private information about a chip’s limitations can cultivate, say, Intel’s trust by informing it of those limitations and of new market opportunities. It can also undermine Intel’s trust in its other partners (such as software developers that target their applications to a chip’s specific capabilities) by communicating information that it privately knows (e.g., from other suppliers) to be relevant to Intel but that it also knows will not be conveyed to Intel by those other partners in a timely fashion (because of their less intense interaction with Intel and their time constraints). Thus, relative differences in interaction intensity can be used to seed breaches of trust based on asymmetries of understanding as well as of information.
More broadly, the kind of epistemic network analysis that we have been engaging in can be employed to model the relative fragility and robustness of (trusting) relationships. In particular, network fragility corresponds to the degree to which noisy interaction environments engender breaches of trust that result in the dissolution of ties; network robustness corresponds to the degree to which they do not.
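Because a single perceived counterexample refutes the subjunctive that underwrites trust, tie survival under noise is sharply nonlinear. A toy Monte Carlo, with an invented per-interaction misinterpretation probability standing in for "noise," illustrates the point.

```python
# Toy Monte Carlo: a tie dissolves on the first interaction that A (mis)reads
# as a breach; "noise" is the per-interaction probability of such a reading.

import random

def survival_rate(noise, interactions=50, trials=10_000, rng=random.Random(0)):
    survived = 0
    for _ in range(trials):
        if all(rng.random() >= noise for _ in range(interactions)):
            survived += 1            # no interaction was read as a breach
    return survived / trials

for noise in (0.001, 0.01, 0.05):
    print(f"noise={noise:>5}: {survival_rate(noise):.1%} of ties survive 50 interactions")
```

In this model survival falls as (1 - noise) raised to the number of interactions, so even modest noise levels dissolve most long-running ties, which is the fragility the one-counterexample logic predicts.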
We have seen that noise in interpretation of an agent’s behavior by his/her alters creates the possibility for breaches of trust that are shielded by justifications ex post such as “I did not know it would hurt you.” However, our definition of trust provides for distinguishing trust in competence from trust in integrity and therefore distinguishing “You should have known it” from “You should have said so” rejoinders. We are, as a result, able to investigate the reparability of breaches of trust in competence and trust in integrity separately, and to investigate whether different forms of breach—in the presence of different kinds of noise—are more or less likely to fracture a tie and compromise a network.
To the extent that trust neighborhoods form the coordinative backbone of a social network (the agent subnetwork most likely to coordinate its actions and co-mobilize purposefully), and to the extent that this coordinative backbone is critical to the functioning and usefulness of the network as a whole, the survivability of the network in its entirety depends on the resilience of the trust-based ties that constitute its “circles of trust”: its trust and security neighborhoods. The question “How fragile is network X?” can then be made more precise by focusing on the specific trust and security neighborhoods that it comprises and asking (1) “How dependent is the existence of these neighborhoods on the specific ties they contain?” and (2) “How fragile are the ties on which these neighborhoods critically depend in the face of attacks meant to undermine the trust that buttresses them (in its various forms)?” Network robustness analysis turns, in this case, on (1) the epistemic structure of a network’s coordinative backbone (its trust and security neighborhoods) and (2) the relative susceptibility of the network’s ties to attacks that may come from either inside or outside.
One illuminating way to appreciate the fragility of a network’s trust and security neighborhood is to consider the labors of our “social terrorist” as she attempts to unravel the “social fabric” that holds a network together on the basis of understanding (1) the epistemic structure of trust-based ties, (2) the topology of the relevant trust-linked subnetworks, and (3) the importance of trust-based subnetworks to the functioning and integrity of the network as a whole. She can proceed as follows.
Undoing Trust-Based Ties. Suppose that dyadic trust takes the logical form we have set forth for trust in integrity or trust in competence. Trust turns, in this view, on the subjunctive belief held by A that B would both know what is relevant and assert it (to A). Knowing this, our terrorist can either provide A with evidence that will lead A to believe that B does not know something that A knows to be true and relevant, undermining A’s trust in B’s competence, or provide A with evidence that will lead A to believe that B has not communicated a fact that B knows to be true and relevant, undermining A’s trust in B’s integrity.
Based on her understanding, our social terrorist realizes that a key variable in this situation is A’s own knowledge of what is true and relevant. If what is relevant changes quickly, then the truth values of the propositions spelling out relevant facts change quickly as well, since knowledge should “track” facts. If communications between A and B are less frequent than changes in what is relevant (or in the truth values of relevant propositions), then our terrorist can undermine A’s trust in B’s competence by providing A with relevant information at a higher frequency, creating situations in which A already knows what is true and therefore believes that B should know it too (but does not). Our terrorist can, of course, do this in the guise and camouflage of “being helpful” to A—which in fact she (partially) is. This strategy does not explicitly cast a shadow on A’s trust in B’s integrity unless A has reason to believe, at some point, that B knows something that is true and relevant and has not communicated it to A.
Knowing this, our terrorist can set up similar high-frequency communications with B, inform B of what is true and relevant (perhaps just after having informed A), and communicate to A that she has already conveyed this information to B. If the frequency of interactions between A and B is not high enough to track the changes in B’s epistemic states that A has been informed of, then situations will arise in which delays in B’s acknowledging or relaying to A the information received from our terrorist will function as evidence undermining A’s trust in B’s integrity. Our terrorist accomplishes this by not informing B that she has already informed A of the change in what is true or relevant, thus leaving B in a fog regarding the timing of the terrorist’s relay of information to A. Our terrorist uses the timing of communications, and uncertainties about who knew what and when, to undercut A’s trust in B.
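The timing mechanism can be sketched as a simple schedule computation: the terrorist relays each new relevant fact to A immediately and to B only after a delay, and A counts as evidence against B any fact on which B is still silent at their next meeting. All rates and delays below are invented for illustration.

```python
# Sketch of the timing attack: tell A at once, tell B late, and let the
# A-B meeting schedule manufacture "evidence" of B's breaches.

def breach_evidence(num_facts=20, ab_meeting_every=5.0,
                    relay_to_B_delay=6.0, fact_interval=1.0):
    """Count facts A learns that B cannot relay by the next A-B meeting."""
    evidence = 0
    for k in range(num_facts):
        t_A_learns = k * fact_interval                 # terrorist informs A immediately
        t_B_learns = t_A_learns + relay_to_B_delay     # ...and B only after a delay
        # Next A-B meeting strictly after A learns the fact:
        next_meeting = (t_A_learns // ab_meeting_every + 1) * ab_meeting_every
        if t_B_learns > next_meeting:                  # B is silent at the meeting,
            evidence += 1                              # which A reads as a breach
    return evidence

print(breach_evidence())                       # delay exceeds meeting gap: many "breaches"
print(breach_evidence(relay_to_B_delay=0.5))   # prompt relay: no evidence at all
```

The crux is that B's conduct is identical in both runs; only the relative timing of the terrorist's relays changes, and with it A's inference about B's integrity.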
Timing is one form of colored noise that our terrorist can introduce to undermine relational trust. It is well known that noise can “undo” cooperative outcomes more generally as a result of the camouflaging properties that the introduction of exogenous and potentially irrelevant information affords (Axelrod 1997). In the case of trust, however, noise need not be exogenous to the epistemic states of A and B. Rather, our social terrorist can introduce noise in communications between A and B by introducing and respecting new distinctions that confuse A regarding what B knows and says. For instance, to a network theorist, speaking of the centrality of agent X in network N is imprecise because there are many different forms of centrality: an agent with high betweenness centrality, for instance, may have altogether different powers of influence from those of an agent with a high degree or eigenvector centrality.
If A and B use a low-resolution language to communicate (for instance, they speak of centrality tout court), then our social terrorist can undermine A’s trust in B’s competence by training A to make distinctions that B does not (e.g., between degree and betweenness centrality) and then using the resulting asymmetry in understanding between A and B to weaken the credence that A places in B’s propensity to know and speak the truth about what is relevant. In a situation in which, say, increasing the betweenness centrality of an agent results in a decrease in his/her degree centrality, our social terrorist can use a statement such as “His centrality has increased,” uttered by B, to demonstrate to A that B in fact does not know (or has not said) what is true about what is relevant, because what is relevant has changed. Distinctions, then, judiciously made and enforced, can create asymmetries of understanding that undermine the subjunctive beliefs in competence and integrity that ground trust.
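The ambiguity is easy to exhibit. In the sketch below (the graph is chosen purely for illustration), the node with the highest betweenness centrality, a broker joining two cliques, is not the node with the highest degree centrality, so an unqualified claim such as "his centrality has increased" can be true under one reading and false under another.

```python
# Example: "centrality" tout court is ambiguous. Two 5-cliques joined
# through a single broker node; the broker is most "between" but has
# the lowest degree of any non-isolated role in the graph.

import networkx as nx

G = nx.barbell_graph(5, 1)          # cliques {0..4} and {6..10} joined via node 5

degree = nx.degree_centrality(G)
between = nx.betweenness_centrality(G)

top_degree = max(degree, key=degree.get)
top_between = max(between, key=between.get)
print(f"highest degree centrality:      node {top_degree}")   # a clique member
print(f"highest betweenness centrality: node {top_between}")  # the broker, node 5
```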
Unraveling Trust and Security Neighborhoods and Corridors. Armed with techniques for undoing trust-based ties, our social terrorist can further apply herself to unraveling trust and security neighborhoods, conduits, and corridors. If these topological structures are “superconductive” in the sense defined earlier, then they are the primary carriers of reliable and accurate information across a network. Moreover, their superconductive properties aid our terrorist in discovering them. The discovery process is far easier for trust neighborhoods than for security neighborhoods because in the former the agents lack common knowledge of which of them make up the neighborhood. Since the only way to access information flows in a trust neighborhood is to become a member of it, and it is easier to fool one person than to fool many people at the same time, our terrorist focuses her infiltration strategies on trust neighborhoods rather than on security neighborhoods.
Armed with knowledge of the trust neighborhood’s topology, our social terrorist can perform the usual analysis of “tie redundancy” and select the ties that are minimally redundant—such as bridging ties between multiply connected subnetworks, which function as “trust bridges.” The ability of a trust neighborhood to survive tie-level attacks, then, is closely connected to the redundancy of ties among the agents in the network. The prospect of a social terrorist, in this case, offers an alternative explanation for the importance of “closure,” one quite distinct from the “mutual monitoring” function argued for previously (e.g., Coleman 1988; Burt and Knez 1995; Krackhardt 1999): robust networks survive, in this view, because they are less susceptible to malicious attacks aimed at undoing trust-based dyadic ties.
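Identifying such minimally redundant ties is a standard graph computation. A sketch, on an invented undirected projection of a trust neighborhood:

```python
# Sketch: finding "trust bridges" -- ties whose removal disconnects the
# trust neighborhood. The graph is illustrative.

import networkx as nx

# Two cohesive triangles joined by a single bridging tie.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "A"),     # triangle 1 (redundant ties)
              ("D", "E"), ("E", "F"), ("F", "D"),     # triangle 2 (redundant ties)
              ("C", "D")])                            # the lone trust bridge

print(list(nx.bridges(G)))        # [('C', 'D')]: the attack target
```

Closed triads contribute no bridges, which restates in graph terms why closure shields a trust neighborhood from tie-level attacks.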
Undermining the Relational Fabric of Networks. To the extent that the effective functioning of a subnetwork depends on the integrity of trust and security neighborhoods and corridors, its survivability is critically linked to that of its associated trust-based structures. Not all subnetworks are equally dependent on a trust backbone, of course. Knowing this, our social terrorist can identify the parts of the network that are most dependent on the integrity of a trust backbone, and she can concentrate her efforts on them. But how does she discern the various subnetworks on the basis of their relative dependence on a trust backbone?
If, as we argue, the value of networks is critically dependent on cooperative and collaborative outcomes, and to the extent that these outcomes rely on coordinated action, and, furthermore, if trust correlates with the cohesion of higher-level epistemic states required for coordination, then our social terrorist can concentrate on the subnetworks that face the most challenging coordination problems. They are likely to be the ones that experience quickly changing environments that require rapid, reliable, and accurate communication of complicated, noise-susceptible knowledge structures. Under such conditions, our social terrorist can inflict “maximum damage” with the least effort—provided that she has targeted the right ties in the trust backbone of the coordination-sensitive (sub-)network.
Summary
We have argued in this chapter for an interactive, epistemic definition of trust and for the importance of trust, so defined, to cooperative behavior in social networks. Trust is a complex epistemic quantity that can be measured noncircularly once it has been specified in terms of agent-level states of belief and knowledge. Its impact on the propensity of social networks to co-mobilize and coordinate can be predicted. Various epistemic structures and information flow regimes that arise in social networks can be understood using the basic building blocks of trust in competence and trust in integrity.
We have shown how trust can be defined in terms of the epistemic states of networked agents, how that definition can be used to measure it, and how epistemically defined trust relationships can function as safeguards of network coordination and information flow. Because trust is defined precisely in terms of agent-level epistemic states, it is possible both to measure it noncircularly using standard empirical methods and to manipulate agent-level epistemic states in order to observe how doing so affects it.
The proposed definition also makes it possible to measure the effect of trust on the propagation of information, knowledge, and beliefs within a network and thus achieve a more textured understanding of the mechanisms by which bridging a structural hole bestows its advantages; moreover, our definition allows us to model the effects of agent actions that make, break, and repair trust by considering the specific changes in the epistemic states of the networked agents that these actions produce.