Chapter 2

An Epistemic Description Language for Social Interactions and Networks

We introduce a modeling language for representing the epistemic states of networked human agents at both individual and collective levels. The new “epistemic description language,” or “EDL,” has a graphical component (diagrams for describing relationships among agents and relevant propositions) and a syntactical component (an epistemic logic that allows us to track epistemic states of linked agents and their dynamics). We use the EDL to articulate both the relationships between individuals and those individuals’ beliefs as the building blocks of epinets—for example, “Alice knows (the cat is on the mat)”—and the basic relationships among individuals’ epistemic states as the elementary building blocks of interactive epistemic networks—for example, “Bob knows (Alice knows (the cat is on the mat)).” We show how epinets can be used to capture causally relevant states of social networks, and we argue for an epistemic description language for social networks and interactions.

. . .

The study of (human) epistemic states is not new to social theorizing. Thomas Schelling’s (1960) original insight—that mutually consistent belief sets are required as antecedent conditions for strategic or coordinative equilibrium-based explanations of social behavior—was developed by David Lewis (1969) into a framework for the analysis of conventions, which Robert Aumann (1976) then used to build an interactive epistemology for strategic interactions that yields sufficient conditions for game-theoretic explanations of social interactions (Brandenburger 1992; Aumann and Brandenburger 1995). “Interactive epistemology” is now a branch of game theory concerned with elucidating the role of epistemic structures as the glue of social networks and interactions. It asks questions like these: “What are the epistemic conditions under which two or more agents arrive at a Nash equilibrium in their strategic interaction?” (Answer: mutual knowledge of game structure, strategies, payoffs, and rationality for two agents; common knowledge of same for all N players.) and “What are the epistemic conditions under which two or more agents can exclude, through iterative reasoning, the dominated strategies in a strategic game?” (Answer: almost common knowledge of game structure, strategies, payoffs, and rationality.)

These questions are relevant to our endeavor; however, the formalism of interactive epistemology is based on a particular set of modeling choices that researchers committed to the quasi-realism inherent in much social network analysis find empirically problematic and pragmatically uncomfortable. Relationally interesting and relevant epistemic states such as awareness (knowledge of one’s own knowledge of X), oblivion (lack of knowledge of X and lack of knowledge about that lack) and subconsciousness (knowledge of X without awareness of that knowledge) are largely uncharted in this framework. Whatever we have now that might qualify as an epistemic theory of relationality can be significantly improved on, albeit at the cost of introducing a more complex modeling language. What, then, are the objectives that such a language should aim to fulfill?

Desiderata for an Epistemic Description Language and an Epistemic Theory of Networks

What we are after is a modeling framework that maximizes the following objectives.

Precision and Subtlety. Unlike the standard economic modeling approach, wherein parsimony is sought through the enforcement of standard probabilistic representations of epistemic states (in which degrees of belief and structural uncertainty are represented by probability distributions and in which strategic uncertainty is represented by conjectures about other probability distributions (Harsanyi 1968a, 1968b; Mertens and Zamir 1985)), our EDL must be able to represent noninteractive epistemic states of individual agents such as awareness, oblivion, and subconsciousness, as well as more conventional epistemic states of knowledge and ignorance. We must also be able to represent the interactive epistemic states of networks and subnetworks in a way that is consistent with the representation of the self-referential epistemic states of individual agents.

The reason for these requirements is that, unlike economic analysis, which has been the source of theorizing about epistemic states in formalized games, network analysis originates in the realist epistemological tradition of sociology, where the interest is in representing epistemic states that closely track perceived ontological or conceptual differences. We aspire to an epistemic theory of networks that fits the epistemological tradition of sociology or artificial intelligence, even as it aims for a logical structure more commonly associated with economic theorizing that rests on an instrumentalist epistemology.

Operationalizability. Related to the requirement of representational precision is the requirement that the constructs that make up the EDL be operationalizable. Thus, we need to be able to demonstrate that trust—defined parsimoniously by some network researchers as a mutual expectation of cooperation (e.g., Burt and Knez 1995)—can be understood as a precisely specified epistemic structure that can be independently measured in a noncircular fashion—that is, not by questionnaires that merely ask, “Do you trust X?” or “Whom do you trust?” but by inquiry into conditions causally linked to trust. Similarly, the EDL we introduce should make it possible to give precise, tractable meanings to such terms as fame and glory, which are important to social network theorists interested in status (e.g., Benjamin and Podolny 1999) but remain under-analyzed.

Logical Auditability. Arguably, logical auditability is a vital aim of any formal model or theory (Saloner 1991; Moldoveanu 2002) and a potent reason for engaging in formal modeling in the first place (Oxley, Rivkin, and Ryall 2010). For a model of interactions to be logically auditable, it must allow us to track how changes in individuals’ beliefs and knowledge affect others’ beliefs and knowledge on the basis of the logical structures we use to represent these states. For instance, if A thinks B thinks that today is Tuesday when in fact it is Wednesday, and B finds out that he is wrong and communicates this to A, who trusts in B’s competence, then we should be able to track the change in A’s and B’s beliefs following the flow of information, all the way from the experience that allowed B to discover his error to the modification of A’s belief about B’s belief.

Our EDL and theory of epistemic networks must also make it possible to check for logical consistency between hypotheses regarding the epistemic states of networked agents and axiomatic assumptions linking these individual states to interactive epistemic states of subnetworks of agents and the kinds of network phenomena we are likely to observe. Such a theory should allow us to derive sufficient epistemic conditions for certain types of network behaviors (e.g., mobilization, exchange, and formation of trust conduits or structural holes), as well as to derive hypotheses about these network phenomena that are theorems of the basic axioms of our epistemic theory and syntactically correct constructions in the basic EDL we advance.

Fit with Other Network Agent Models. An important characteristic of any new theory is the interface it offers its users for communicating with developers of other theories and models. Specifically, it is important for the theory to offer a sufficiently broad conceptual space in which findings from other models of network behavior can be included. Our epistemic theory of networks and accompanying EDL must permit the incorporation of psychological biases that affect how agents process information in the form of operators working on agents’ epistemic states. Simplification biases, for instance, can be incorporated via contraction operators acting on first-order knowledge structures. Attribution biases can be incorporated via distortion or deletion operators acting on first- or higher-order knowledge structures. Our theory should also “dialogue” with game-theoretic approaches for investigating epistemic states and epistemic conditions for equilibria in games (Aumann 1987; Binmore and Brandenburger 1990; Aumann and Brandenburger 1995).

Fit with Empirical Approaches to Networks. The study of networks has evolved as an empirical craft as much as it has as a science or a set of theories: network researchers have specialized tools and techniques for representing, visualizing, and measuring variables of interest, along with a set of theories that function as prediction- and explanation-generating engines. Our EDL and epistemic theory of networks must be engineered both for a pragmatic fit with these tools of the trade and for a logical and conceptual fit with existing theories of network formation, structure, and dynamics.

Epinets are themselves networks. Therefore, standard network analysis tools can be employed to analyze links among agents and their epistemic states in ways that are already familiar to network researchers. For example, if {P1, . . . , PJ} is a set of propositions that are mutually relevant to agents i and j, we can employ the basic strategy of representing networks as adjacency matrices to create N cognitive adjacency matrices around these beliefs: matrix M1 = [m1kl], k, l = 1, . . . , J, defined by the relation m1kl = 1 iff i knows Pk and j knows Pl, and 0 otherwise; matrix M2 = [m2kl], k, l = 1, . . . , J, defined by the relation m2kl = 1 iff i knows j knows Pk and j knows i knows Pl, and 0 otherwise; and so on. In this way, we extend the adjacency matrix tool kit of standard network analysis to the epistemic realm.
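As an illustrative sketch only (not part of the formal EDL), the first two cognitive adjacency matrices might be assembled as follows in Python; the names cognitive_adjacency, knows, and knows_that are hypothetical placeholders for predicates that would in practice be populated from survey or interview data.

    # Sketch: cognitive adjacency matrices M1 and M2 for agents i and j over
    # mutually relevant propositions P1 ... PJ. The predicates 'knows' and
    # 'knows_that' are stand-ins for empirically measured epistemic relations.
    import numpy as np

    def cognitive_adjacency(props, i, j, knows, knows_that):
        J = len(props)
        M1 = np.zeros((J, J), dtype=int)  # level 1: shared first-order knowledge
        M2 = np.zeros((J, J), dtype=int)  # level 2: knowledge about the other's knowledge
        for k, Pk in enumerate(props):
            for l, Pl in enumerate(props):
                # m1_kl = 1 iff i knows Pk and j knows Pl
                M1[k, l] = int(knows(i, Pk) and knows(j, Pl))
                # m2_kl = 1 iff i knows j knows Pk and j knows i knows Pl
                M2[k, l] = int(knows_that(i, j, Pk) and knows_that(j, i, Pl))
        return M1, M2

    # Toy usage with knowledge stipulated by hand.
    props = ["P1", "P2"]
    knows = lambda agent, p: (agent, p) in {("i", "P1"), ("j", "P1"), ("j", "P2")}
    knows_that = lambda agent, other, p: (agent, other, p) in {("i", "j", "P1")}
    M1, M2 = cognitive_adjacency(props, "i", "j", knows, knows_that)
    print(M1)  # [[1 1] [0 0]]: i knows P1; j knows P1 and P2
    print(M2)  # all zeros here: j knows nothing about what i knows

Higher-level matrices M3, M4, . . . would follow the same pattern, with deeper nestings of the knows relation.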

An Epistemic Description Language

Our epistemic description language features a graphical tool that maps agents and what they believe or know, and a syntactical tool that allows us to analyze the content and structure of an epinet. The graphical tool is based on a set of building blocks, depicted in Figure 2.1, for intuitively representing what one or more agents know or believe. A node in an epinet is either an agent (Alice, Bob) or a proposition, which can be a well-formed formula of first-order logic (for all x, if x = g, then Fx = Fg) or a well-formed sentence of a natural language (“There are no black swans on this pond”).

Links in epinets are directed: they travel from an agent (Alice) to a proposition (Today is Tuesday) and encode epistemic relations like “knows” and “believes.” In Figure 2.1, part (a) represents the epistemic state Alice knows P or, more compactly, AkP, and part (b) represents the epistemic state Alice believes P, or AbP. Epinets also allow us to represent intuitive epistemic states such as awareness, oblivion, and ignorance via directed ties linking Alice to the relevant proposition (P) and to her epistemic state about the proposition, as illustrated in Figure 2.1, parts (c)–(e) respectively.

Figure 2.1 Simple and Compound Epistemic States via Epinets

We can add both agents and propositions describing what agents might believe to epinets via additional nodes that represent them and further directed ties that link agents to propositions and agents’ epistemic states to one another. For instance, part (f) in Figure 2.1 shows an epinet comprising two agents and a proposition, where Alice knows P and knows Bob knows P, which he in fact does, even though he does not know that Alice knows P. Finally, epinets allow us to graph—and the accompanying syntax allows us to express—more subtle epistemic states such as confidence in one’s ability to know the truth. As shown in part (g) in the figure, Alice’s confidence in her beliefs about P, for instance, may rest on her knowledge that if P were true, then she would know it—call this proposition Q, so that her state is Ak(“P → AkP”), or AkQ—which establishes that her knowledge is truth tracking. Together, these building blocks allow us to construct epinets to represent in microstructure the epistemic glue of an interaction or relationship, and to build an analogous set of statements (using our epistemic syntax) to track and audit the epistemic structure and dynamics of a network.

Primitives and Kernels for an Epistemic Description Language

Now that we have an intuitive sense of the EDL, let us introduce, unpack, and justify some of its main components in greater detail, to explicate our choice of epistemic relations and the logical relationships among them.

Individual Epistemic States. We begin with a basic set of individual epistemic states, as follows.

Knowledge (K). A knows P: AkP, for some agent A and some proposition P. If agent A knows that, say, P = A judgment in favor of the defendant was entered today, then A will, ceteris paribus, act as if P is true in those cases in which (1) P is relevant and (2) A sees P as relevant (in which case we say that P is salient to A). k is a simple binary relation (either AkP or ~AkP, where ~ denotes propositional negation) between an agent (or his mind) and a proposition P that is characterized by the following necessary conditions: (1) A believes P for a valid reason R, and (2) P is true. Note that these are not sufficient conditions for knowledge (Gettier 1963). A may believe P = There is a quarter in my pocket on the basis of the vague sensation of a weight in his left pocket, and, indeed, there is a quarter in A’s pocket, but in his right one. In this case, A does not know P, even though he has a valid reason for a true belief. What counts is having the right reason for holding that belief.

A question may be raised at this point regarding our focus on propositions—subject-predicate sentences constructed according to more or less “self-evident” syntactic rules—rather than “states of the world” as the proper arguments of knowledge states. Since Savage (1954), states of the world have been treated as the fundamental building blocks of what decision theorists call semantic state spaces (Aumann 1976, 1989, 1999a, 1999b). If we use semantic state spaces, we say that “A knows e, knows that e ⊆ f”—that is, “Agent A knows that event e has come to pass, knows that event e will occur only if f occurs, and therefore knows that event f has occurred.” Describing what an agent knows in terms of his or her degrees of belief regarding states of the world—rather than propositions referring to objects and events or propositions that are true or false as a function of certain states of the world—is attractive because it seems to eliminate the need to consider syntactic properties of the arguments, such as the grammatical form of propositions. Yet states of the world show up in our speaking about agents and their beliefs in ways that are categorized and expressed by propositions such as It is raining, This car is blue, You are here and now lying. Agreement or disagreement with such propositions is the “data” produced when we investigate agents’ states of knowledge via surveys and questionnaires. These propositions—not the underlying events to which they refer or seem to refer—are the most immediate phenomena with which we deal and which our models should explain. They are bound by rules of predication and syntax on the one hand and by criteria of verification or corroboration on the other.

It will not do, as some suggest (Aumann 1989; Samet 1990), to describe a state of the world as the set of all of the propositions that are true of or “at” it, because doing so ignores the problem of ascertaining whether or not propositions refer to the events about which they seem to speak. Think of all of the ways to describe the conditions under which It is raining today holds true: their sheer number renders intractable the enumeration of the propositional representation of a state—or of the epistemic state of an agent who knows that state. The problem with Samet’s proposal is not only one of enumeration but also one of relevance: there is no way to delimit the range of true propositions warrantedly assertible about the set of conditions that constitute a rainy day that are also relevant to a particular situation or context and therefore properly uttered or believed by an agent in that context.

The problem of reference extends beyond relevance. Epistemic game theorists do not see a radical distinction between syntactic (propositions) and semantic (events and collections of events) approaches to modeling the arguments of agents’ knowledge (Aumann 1999a, 1999b). Their view appears to stem from an implicit belief that the truth value of a proposition fixes the reference to (or “meaning” of) the objects to which the proposition’s words refer. The word rain thus refers to the composite set of raw feelings and subjective experiences of “rain,” and this correspondence is (supposedly) fixed by a reference relation that is determined by the truth value of propositions like It is raining today. But this assumption is false: truth conditions of propositions involving referent terms do not fix the reference of those terms (Putnam 1981). There is thus a conceptual and a practical distance between syntactic and semantic approaches that cannot be bridged simply by mirroring the apparatus of set theory, which we use to represent and symbolically manipulate events, in the apparatus of first-order logic, which we use to represent and symbolically manipulate propositions.

We thus distinguish between the raw qualia—that is, instances of subjective conscious experience or perceptions that correspond to an event or object—and a proposition (a fact, say) that refers to that event or object. The distinction is critically important to our description language because the language does not work on state spaces of events whose interpretation is self-evident—like those of game theory and standard decision theory. The language works on propositions that are true or false, depending on both states of the world and characteristics of the observer (such as her perceptual apparatus and the language she uses to represent and communicate).

Pragmatically, probing the epistemic states of agents—through questionnaires or surveys, for example, or through elicitation of forced choices among lotteries with outcomes determined by the truth value of certain propositions—is based on the specification and enumeration of propositions that we expect to have well-defined truth values for the agents in question. Whether or not an agent’s state space contains It is raining today as a possible state depends on whether or not It is raining today is an intelligible proposition for that agent, where “intelligible” means “with a known set of truth conditions.” Because we have access to an agent’s personal state space only through propositions that describe either the space or the way in which the agent describes it, it makes sense for an EDL to be based on the set of propositions that the agent considers to have truth conditions, rather than on a set of events that refer to “raw” percepts but are only accessible or describable using propositions.

Consequently, we employ propositions that refer to events and objects rather than to states of the world as the arguments of the epistemic states of an agent, and we accept the challenge that different agents may propositionalize their raw experiences differently and in ways that are intersubjectively untranslatable because they use different languages or language systems to represent perceptions in propositional form. For instance, It is raining today is a proposition that represents a set of “raw perceptions”—cloudy skies, wet pavement, drops falling from the sky—that may be put into words differently by different agents. One person’s rain can be another’s drizzle and yet another’s mist. Agents need agree only on whether or not they disagree about the truth of a proposition, not on the right way to refer to an event via a particular one. This approach discards “states of the world” and “events” in favor of propositions, their truth values, and agents’ epistemic states regarding the content of the propositions and their truth values. Thus, an agent’s actions have consequences that are not the standard set of outcomes of decision theory and game theory (events constituted by subsets of states of the world) but are rather the result of changes in the truth value of certain propositions and changes in the epistemic states of agents about the truth value of these propositions and about each other’s epistemic states regarding them.

For example, the fact that Alice defects rather than cooperates in a one-shot prisoner’s dilemma game with Bob induces a change in the truth value of the proposition Alice defect(ed)—a change for Alice, provided she is aware of her own intentions and their link to her actions and their outcomes; a change for Bob, provided he makes the right inference from the observed outcome of Alice’s intentions; and a change for no one if Alice and Bob are unaware of or oblivious to each other’s behavior and/or intentions. Moreover, the truth value for Bob of Alice defected is not necessitated by his observation of the outcome of the game. Bob can put into language his observation of the outcome as “A suboptimal outcome was realized” without further attribution or interpretation. Whether or not the way Bob puts experiences into language influences how he remembers them is naturally a relevant and interesting topic of inquiry that is already pursued in cognitive psychology (Loftus 1979).

Awareness and Awareness of Level n (kn). The state “A knows P; A knows she knows P” is one that we refer to as awareness. “A knows she knows P, and so forth, to n levels”: AkP, Ak(AkP), Ak(Ak(AkP)), . . . , Ak(. . . (AkP) . . .), abbreviated as AknP, generalizes awareness to a state in which A knows P, knows that she knows it, and so forth, to n levels of nesting. This epistemic state relates to an agent’s introspective insight into the contents of his/her own knowledge. Not all agents know they know what they know: they may have imperfect or unreliable recall, and such infelicities of introspection are clearly relevant in social interactions.

Level 2 awareness (i.e., Ak2P) denotes the situation in which an agent knows P and knows that he knows P. A may know P = Profits are nondecreasing in industry concentration in the sense that A says “yes” when asked, “Is it true that profits are nondecreasing in industry concentration?” But A may not know2 P in the sense of being able to take unprompted action guided by a plan that rests on P when P is relevant to the situation. When A does know2 P, by contrast, P is always-already salient to A, who does not need to be prompted in order to recognize its relevance. Knowledge of P does not guarantee to A the salience of P, but only guarantees that A will act as if P is true if P already is or is made salient. Level 2 awareness of P guarantees that A acts on P where P is relevant—that is, A finds P to be salient where P is relevant.

Via axioms that formalize commonsense intuitions about epistemic states and epistemic rationality, the literature on epistemic game theory, dating back to Aumann (1976), attempts to force certain properties on the epistemic states of agents. Positive introspection requires that if an agent knows P, s/he knows that s/he knows P (Modica and Rustichini 1999). However, this requirement is too strong: it blurs a distinction—between knowledge and level 2 awareness—that is often relevant to modeling social interactions among real agents who may be prone to temporarily imperfect recall of what they know.

For instance, one agent may purposefully ask another agent questions that remind the latter about what s/he (really) knows, as a way of manipulating him or her into a particular thought or action. Harry could ask Sally an open-ended question about how the last quarter went, expecting Sally to interpret the question as being about revenue and costs rather than about technology; but there has been a major technical delay (which Harry knows, and knows that Sally knows), and he makes use of her lack of awareness of the delay to demonstrate her ineptness as a manager. One way to represent this set of conditions is to claim that Sally’s recall is imperfect, but doing so establishes perfect recall as a precondition for epistemic rationality that is too strong. Human agents differ with respect to their knowledge of what they know (or believe); their memory can be affected by the pragmatics of their interactions and their situational constraints in ways that are adaptive and useful, and should therefore not be deemed irrational or un-rationalizable without further inquiry.

In contrast to positive introspection, negative introspection requires that an agent know that s/he does not know something that s/he in fact does not know (Modica and Rustichini 1994). This condition is also too strong, for it requires that there be no unknown unknowns that are nevertheless relevant to an interaction. This is a counterintuitive and unappealing requirement. Think of the proposition He is about to kill us as part of the epistemic space of Heather, the young wife of Robert, a serial killer who has successfully played a model husband at home. It is possible—though strained, odd, and unrealistic—to say that Heather knows that she does not know that Robert is about to kill her and her children; it seems far more realistic and useful to say that Heather knows neither that He is about to kill us nor that she does not know it. Rather, she is oblivious of it: the proposition is simply not part of her (propositional) “state space.”

We distinguish between states of awareness (positive introspection) and states of ignorance (negative introspection), unawareness, and oblivion as follows.

Ignorance. A does not know P, but knows he does not know P: ~(AkP) & Ak(~AkP). Ignorance of P (which can also be more loosely understood as uncertainty about P) is such that A pays attention to the values of variables that are relevant to the truth value of P, but does not actually know those values or know the truth value of P. Ron does not know whether P = Industry profits rise with industry concentration is true or false if he is ignorant about P, but, in a situation where the truth value of P is relevant, he seeks out signals that inform the truth or falsity of the proposition (because he knows he does not know it and wishes to determine its truth value).

Unawareness. A knows P but does not know she knows it: AkP & ~(Ak(AkP)). In this state, A may have temporarily forgotten that she knows P—although she may well answer “yes” if asked whether or not P is true. For example, Lynda may answer “yes” to the question “Is it true that it was raining last Wednesday?” but may have answered “I do not know” to the question “What was the weather like last Wednesday?” In this case, there is a leading aspect to the first question that affects what the agent can retrieve; however, it is not correct to say that the agent does not know what the weather was like last Wednesday. It is then possible—and potentially highly relevant—that A knows P but does not know s/he knows P (i.e., AkP & ~(Ak(AkP))).

On the other hand, lack of knowledge combined with the lack of knowledge about the lack of knowledge is represented as follows.

Oblivion. A does not know P and does not know that he does not know it: ~(AkP) & ~(Ak(~AkP)) & . . . In this state, A neither knows P nor is heedful, in any way, of P or of information that can inform A about the truth value of P: he does not pay attention to variables or experiences that can affect P’s truth value. If technology T1 is relevant to a future dominant product in industry I but completely irrelevant to the current product set, then an incumbent A may be said to be oblivious of the proposition P = T1 will supply the next I-dominant product design if he does not ask himself, when exposed to T1, “What difference will T1 make to I?” Oblivion is fundamentally different from ignorance: an ignorant A raises questions about the subject P when P is relevant (and made salient), and an A who is level 2–aware of his ignorance of P raises questions about P when P is relevant, but an A who is oblivious of P does neither: P is outside of A’s possible “zone of heed.”
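The four states just distinguished can be read off two questions about A’s relation to P: does A know P, and does A know her own epistemic position with respect to P? A minimal Python sketch of this classification, offered purely as an illustration of the definitions above, follows.

    def epistemic_state(knows_p, knows_that_knows_p, knows_that_not_knows_p):
        """Classify agent A's individual epistemic state with respect to P.

        knows_p                 -- AkP
        knows_that_knows_p      -- Ak(AkP)
        knows_that_not_knows_p  -- Ak(~AkP)
        """
        if knows_p and knows_that_knows_p:
            return "awareness"    # AkP & Ak(AkP)
        if knows_p and not knows_that_knows_p:
            return "unawareness"  # AkP & ~(Ak(AkP))
        if not knows_p and knows_that_not_knows_p:
            return "ignorance"    # ~(AkP) & Ak(~AkP)
        return "oblivion"         # ~(AkP) & ~(Ak(~AkP))

    print(epistemic_state(True, False, False))   # unawareness
    print(epistemic_state(False, False, False))  # oblivion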

Degree of Belief. Agents’ epistemic commitments to propositions about the world may be uncertain and risky. We can relax the stringent requirement that knowledge (or ignorance) imposes on an agent’s epistemic state by referring to his/her beliefs regarding the truth value of a proposition, which can be measured via a degree of belief that can itself be either a probability (mathematical, statistical, or subjective) or some other weight (a fuzzy logic weight that does not have to satisfy the axioms of probability theory). In this case, we denote the epistemic state of agent A who believes P as AbP and the state of an agent who attaches degree of credence w to the truth of proposition P as AbwP. In the introduction of basic epistemic states that follows, we refer to states of knowledge by A of P, with the understanding that states of knowledge can be replaced by states of belief or w-strong belief without loss of generality.

Our definition of knowledge states can be further usefully augmented by an epistemic qualifier that safeguards an agent’s epistemic state and functions as a warrant for action predicated on that state. An agent may believe that a proposition, P, is true to a certain degree, w, but within the set of beliefs to which an agent attaches a similar degree of credence, there is an important distinction between a w-strong belief and a belief that “were P true (or false), then the agent would know it or come to know it.” This subjunctive epistemic state relates to the confidence that the agent has in her own epistemic rationality. Instead of modeling this state as a second-order probability—a degree of credence in the weight with which the agent believes P, which invites an infinite regress—we model it as a special propositional belief. In its most basic form, this meta-epistemic state can be defined as follows.

Confidence (Con). A knows that if P were (not) true, A would (not) know P: Ak(“P → AkP”), where the double quotation marks signify the subjunctive form of the enclosed phrase. Con captures A’s belief about her own epistemic capabilities and virtues relative to a particular proposition. A binary measure, which can be rendered continuous if necessary, Con characterizes A’s belief in her own epistemic states and answers the question “Does A believe what she knows?” Note that the converse question—“Does A know what she believes?”—is captured by A’s awareness, as described previously.

Collective Epistemic States. Thus far, we have presented our descriptive tool kit for a core of individual epistemic states—the basic palette of states from which to construct an EDL for networked agents and from which to build an epistemic framework for studying relationality and sociality more generally. To complete the palette, we need a set of collective epistemic states that describe ensembles and groups of interrelated agents, networked or otherwise. We consider a network of agents and propositions—an epinet—modeled by a graph, G, not necessarily fully connected, and we distinguish the following collective, or network-level, epistemic states.

Distribution or Sharedness. A knows P and B knows P and C knows P . . . : AkP & BkP & CkP . . . Distribution measures the spread of P throughout a network of agents, G, either in absolute terms (the total number of agents that know P) or in relative terms (the proportion of agents in the network G that know P).

Collective Awareness of Level n. A knows P to level n and B knows P to level n and C knows P to level n . . . : AknP & BknP & CknP . . . Collective awareness measures the degree of level n awareness about P in network G, either in absolute terms, as the number of agents that know P to level n, or in relative terms, as the proportion of agents in G that know P to level n. Level 2 collective awareness (k2) guarantees that the agents in G that know P also find P salient when P is relevant (because they know that they know P). As is the case with knowledge distribution, awareness is not an interactive epistemic property of a network: it simply guarantees that the agents in a network know P and find P salient when P is relevant. Awareness aids coordinated action in mobilization-type models, but cannot guarantee it, because an interactive knowledge structure is still required to induce action. To streamline the discussion, henceforth we refer to “knowledge” rather than “awareness” unless otherwise noted.

Near-Commonality of Level n (NCn). A knows P and B knows P and A knows B knows P and B knows A knows P, and so forth, to level n: AkP & BkP & AkBkP & BkAkP & . . . abbreviated as (AkBk)n(P) & (BkAk)n(P). Commonality of level n measures the level of mutual knowledge relative to P of the agents in network G. It is a measure of the coordinative potential of the network in the sense that many coordination scenarios not only require agent-level knowledge of focal points or coordinative equilibria and a selection criterion among them; they also require agent-level knowledge about the knowledge of other agents and about the knowledge that other agents have of the agent’s own knowledge (Schelling 1978). Level 2 commonality (mutual knowledge) and level 3 commonality of knowledge about some proposition P (i.e., AkBkAkP & BkAkBkP) are therefore particularly important for studying network-level mobilization and coordination. The value of closure to agents in a densely connected network is materially affected by the agents’ ability to accurately and reliably co-mobilize and coordinate their activities for a common purpose.

Commonality (C). NCn with n = ∞. A knows P and B knows P and A knows B knows P and B knows A knows P, and so forth, as n increases without bound: (AkBk)∞(P) & (BkAk)∞(P). This is the full-common-knowledge state typically assumed to be a precondition for the deployment of game-theoretic analyses (Brandenburger 1992)—usually without postulating a mechanism or process by which knowledge becomes common or almost common.
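To illustrate how these collective measures could be computed in practice, here is a hedged sketch over a toy epinet; the encoding of an epinet as a set of nested “knows” tuples is our own illustrative choice rather than a feature of the EDL itself.

    # Sketch: distribution (sharedness) and near-commonality of level n over a
    # toy epinet. The epinet is encoded as a set of knowledge atoms: ("A", "P")
    # stands for AkP and ("A", "B", "P") for AkBkP. This encoding is illustrative.

    def distribution(epinet, agents, P):
        """Proportion of agents in the network that know P."""
        return sum((a, P) in epinet for a in agents) / len(agents)

    def near_commonality(epinet, a, b, P, n):
        """True iff a and b share level-n almost-common knowledge of P:
        AkP & BkP & AkBkP & BkAkP & ... up to chains of n knowledge operators."""
        for depth in range(1, n + 1):
            for first, second in ((a, b), (b, a)):
                chain = tuple(first if i % 2 == 0 else second for i in range(depth))
                if chain + (P,) not in epinet:
                    return False
        return True

    # Example: AkP, BkP, and AkBkP hold, but BkAkP does not.
    epinet = {("A", "P"), ("B", "P"), ("A", "B", "P")}
    print(distribution(epinet, ["A", "B"], "P"))       # 1.0 (P is fully distributed)
    print(near_commonality(epinet, "A", "B", "P", 2))  # False (BkAkP is missing)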

Drawing on the building blocks illustrated in Figure 2.1, a basic notational form for epinets is illustrated in Figure 2.2 for a range of the individual and collective epistemic states just introduced. Nodes in an epinet are human agents (circles) and propositions in subject-qualifier-predicate form (squares), which may be reflexive (i.e., about the state and structure of the network itself); edges (links) represent the “knows that,” “believes that,” “conjectures that,” and so forth, relations that link agents to propositions (AkP) or agents to other agents to propositions (AkBkP). The epinet as a whole is a directed graph, G ({agents, propositions}; {epistemic links}), that encodes the relevant agents, the epistemic links among them, and the set of propositions relevant to their behavior and interaction.

Figure 2.2 Epinet with Epistemic States

source: Adapted from Moldoveanu and Baum, 2011.

The Significance of Epistemic Levels

The perceptive reader will have noticed that the epinet in Figure 2.2 makes distinctions that are not customary in traditional game-theoretic analyses. She might ask, “Why distinguish degrees of commonality of knowledge and belief among agents in a network when representing their interactive epistemic states?” As Ayres and Nalebuff (1997) observe, traditional game-theoretic analyses have hampered themselves by assuming that information is either private (one and only one agent knows it, knows s/he knows it, etc.) or common knowledge (all agents know it, know they all know it, etc.). Yet many of the most interesting phenomena that arise in social interactions involve a finite level of interactive knowledge: agents’ beliefs about other agents’ beliefs (level 2) or agents’ beliefs about other agents’ beliefs about their own beliefs (level 3).

Ayres and Nalebuff consider a labor–management dispute that has resulted in a strike. The strike costs labor $10,000 per day and costs management $30,000 per day. Management has a total “strike budget” of $3,000,000; labor, one of $700,000. Management can thus hold out for 100 days, while labor can hold out for only 70 days (assuming that the party whose funds have run out will have to concede and accept the other’s terms and conditions). If management agrees to labor’s demands, its total cost will be $1,000,000; so if the strike would otherwise last for more than 33 days (33 × $30,000 ≈ $1,000,000), management should give in today. If labor “wins,” it stands to gain the equivalent of an additional $600,000 in job security–related pay, so it should be willing to strike for at most 60 days (60 × $10,000 = $600,000).

To render this negotiation “tractable” to analysis from a “rational agent” perspective in which labor and management act in their best interest and know the logical consequences of what they already know, we usually make an assumption about the structure of information. Specifically, we assume that all the figures just given are “common knowledge” between labor and management: labor knows it, management knows it, labor knows management knows it, and so forth, ad infinitum. But what if one bit of information is not common knowledge in this way? In particular, suppose that the size of labor’s strike fund is not common knowledge. How do successively higher levels of knowledge of this fact affect the dynamics of the negotiation? Ayres and Nalebuff’s analysis of this scenario reveals the dependence of the negotiation on “beliefs about beliefs” by working backward from the point past which labor can no longer hold out—day 70.

Level 1: Suppose that management knows the size of labor’s strike fund. On day 70, then, management rationally should not concede because it knows that labor has to concede the following day, giving management a $1,000,000 benefit for the $30,000 cost of another day. In fact, the same logic applies on any day after day 37 because, after that day, management can wait for 33 days to register a positive gain.

Level 2: If labor knows that management knows the size of labor’s strike fund, then it is rational for labor to plan on quitting immediately after day 37, if that point in the strike is reached.

Level 3: If management knows that labor knows that management knows the size of labor’s strike fund, then it should not quit any day after day 4 of the strike, as its payoff to maintaining its position will be net-positive.

Level 4: If labor knows that management knows that labor knows that management knows the size of labor’s strike fund, then labor will realize that it should quit on day 4 of the strike because management has no incentive to give in thereafter.

Level 5: If management knows that labor knows that management knows that labor knows that management knows the size of labor’s strike fund, then management will not concede on any of the first four days, as all it has to do for labor to feel compelled to concede is wait a maximum of four days.

Level 6: If labor knows that management knows that labor knows that management knows that labor knows that management knows the size of labor’s strike fund, then labor should rationally decide to concede right away.

The analysis appears to “solve” the game by a six-fold iteration of the knows operator, coupled with some (possibly optimistic) assumptions about the rationality and logical prowess of the agents. The important thing is that every (interactive) iteration of the knows operator produces a new and important bit of information about what a rational agent endowed with the resulting epistemic state should do. Each difference in epistemic state generated by such an iteration makes a difference in the structure of the game and, contingent on the participants being rational in at least the two ways specified previously, each difference in epistemic states should also make a difference in their actions.
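The day thresholds quoted at each level can be recovered mechanically by working backward from labor’s day-70 exhaustion point. The short Python sketch below does this with the chapter’s dollar figures hard-coded; it is an illustration of the backward-induction arithmetic, not a general model of the negotiation.

    # Sketch: backward-induction thresholds for the Ayres-Nalebuff strike example.
    # Figures from the text: the strike costs management $30,000 per day, conceding
    # to labor costs management $1,000,000, and labor's fund runs out on day 70.
    MGMT_DAILY_COST = 30_000
    CONCESSION_COST = 1_000_000  # management's cost of accepting labor's demands

    def thresholds(deadline=70, levels=6):
        """For each knowledge level, return the day from which management should
        stop conceding (odd levels) or by which labor should plan to concede
        (even levels)."""
        out = []
        for level in range(1, levels + 1):
            # Management holds out on any day d with
            # MGMT_DAILY_COST * (deadline - d) < CONCESSION_COST.
            holdout_from = max(0, deadline - CONCESSION_COST // MGMT_DAILY_COST)
            if level % 2 == 1:
                out.append((level, "management holds out from day", holdout_from))
            else:
                deadline = holdout_from  # labor, knowing this, adopts a new deadline
                out.append((level, "labor plans to concede by day", deadline))
        return out

    for level, rule, day in thresholds():
        print(f"Level {level}: {rule} {day}")
    # Prints days 37, 37, 4, 4, 0, 0 -- matching the six levels in the text.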

Now consider how one can explain what “actually happens” in a real negotiation that has the same payoff structure as that described. Suppose labor concedes to management’s demands on day 6 of the negotiation. What do we make of this? The conservative interpretation from the standpoint of preserving the rational agent assumption is to posit that labor “did not think far enough ahead”; it “did not get to level 4” in the belief hierarchy outlined earlier. This interpretation has the advantage that we can still operate comfortably within the bounds of a rational agent model, parameterized according to the number of logical operations (in this case, iterations of the believes or knows operator, and the logical consequents of applying either operator to some propositional belief) that an agent can be presumed to be capable of performing. This does not explain why labor conceded on, say, day 6 rather than day 5, 7, or 15, but that, of course, is what “noise” is for in all such explanations. So far, we are both well inside the rational choice mode of modeling agents and making full use of the interactive belief hierarchy: differences in what an agent believes about what another agent believes make a real difference in the explanation and in our understanding of the situation.

Suppose further that we want to determine if our overall explanatory framework is on the right path and that we are open to the possibility that the rational choice model, along with some additional informational and meta-informational assumptions, is not in fact the right explanatory framework. If we interview and survey members of labor’s negotiating team, we can design a protocol for probing both their rationality and their degree of “logical prowess.” Suppose that such a protocol, once implemented, reveals that members of the team have not thought about what management believes labor knows about the size of labor’s strike fund, and that they are genuinely surprised that such a thing matters. Suppose further that the interview and survey results show that labor conceded because of an internal quarrel within the negotiating team that might have undermined the cohesion and credibility of its negotiating position.

There is, of course, no need to “throw out” either the rational actor or the belief hierarchies model: all things considered, it may be rational to allocate our attention and cognitive resources to issues that are imminent and impactful, and it may well be that within the negotiating team an interactive belief hierarchy is nevertheless of explanatory value regarding the internecine conflict that has destabilized labor’s collective resolve. We do, however, need to augment our model of the agents’ epistemic states in order to account for labor’s apparent state of oblivion regarding management’s knowledge of the size of labor’s strike fund.

Suppose, instead, that the interviews and questionnaires reveal that the negotiating team had known, on day 1, all the way up to level 6 of our belief hierarchy, but after day 1, caught up in the venomous negativity of the dispute, acted as if they had forgotten that they knew it, but then conceded as soon as they remembered it. Here again, we would be well advised to introduce language to help us make sense of a situation in which an agent knows a proposition and also knows that s/he knows it, at all relevant times, which we can call awareness of that proposition. The rational agent model usually constrains us to assume that agents always know what they know and that they can recall it “on cue” whenever that knowledge becomes relevant. So, although the imperfect recall scenario violates at least some versions of such a model, epinets are nevertheless useful in spite of the fact that the explanations they produce deviate from normative requirements of perfect recall.

The foregoing analysis illustrates two basic points. The first is that epistemic states and belief hierarchies make significant differences both in how we analyze and in how we explain social behavior, even if we stray from rational agent and game-theoretic models of behavior that have traditionally been associated with interactive belief hierarchies. The second point is that it is clearly helpful to augment the vernacular of traditional rational agent and game-theoretic models to include more complex states such as awareness and oblivion.

The same kind of epistemic levels analysis can be applied to the interaction structure and plot dynamics of the plays we considered as intuitive motivating examples in Chapter 1. For instance, in Arsenic and Old Lace, Kesselring sets up an initial epistemic network (from the perspective of the audience), presented in Figure 2.3, that has all of the trappings of an “unusual, but nothing more serious than that” contemporary family. With new information revealed by the plot (the discovery of the first body in the basement), the epinet evolves to a new state, shown in Figure 2.4, that is nevertheless coherent for each character; with the discovery of yet another body, it evolves to the further set of interconnected epistemic states presented in Figure 2.5.

The agents play out, for the audience’s benefit, a sort of “epistemic nightmare” that can be tracked through the changes in the state of the epinet that describes the group of agents as a whole. It is important for the epinet to comprise both the specific characters and a specific and dynamically changing set of propositions about which they have beliefs (even though very different beliefs). This is because several characters are ex ante oblivious of several different facts (called out by propositions), and it would be inaccurate to introduce these propositions (even with probability zero) in the description of the epistemic states of the characters at the beginning of the play. Changing the epistemic state of an agent from oblivion to awareness is very different (and this is where most of the dramatic impact accrues) from changing the agent’s epistemic state from ignorance to knowledge. Moreover, as social life resembles drama far more often than it does chess, there is much that a modeler can learn from the epistemic state dynamics of a dramatic production.

Figure 2.3 Epinet: Arsenic and Old Lace—Initial

Figure 2.4 Epinet: Arsenic and Old Lace—Discovery of the First Body

NOTE: Italicized text indicates changes from initial epistemic states described in Figure 2.3.

Figure 2.5 Epinet: Arsenic and Old Lace—Discovery of the Second Body

NOTE: Italicized text indicates changes from epistemic states at the time the first body is discovered described in Figure 2.4.

Using Epinets to Describe the Epistemic Glue of Everyday Social Interactions

The everyday epistemic tangles we considered in Chapter 1 can also be mapped out as epinets to lay bare the logical structure of each situation. Doing so allows us, in many cases, to understand the various consequences of epistemic mismatches and other malentendus.

Vignette 1. Recall that Alice is seeking a job in Bob’s firm and is being interviewed by Bob for this purpose. Alice has made some false statements on her resume, such as that she won an engineering contest in college; let this be statement q. The statement “Alice did not win the engineering contest” can be denoted p. Unknown to Alice, Bob knows that his son won an engineering contest that year at the same college, and he suspects—but does not know—that it is the same contest that Alice claims to have won. So we have that Alice believes p, as does Bob. Alice believes her deception is successful, so she believes that Bob believes q, and Bob believes she believes she has succeeded, so he believes that Alice believes he believes q. However, Bob believes Alice believes p. Because she is quite certain of the success of her deception, Alice does not go through any further levels of interactive reasoning; Bob, because he believes she is deceived about his being deceived, is able to avoid revealing his doubts, leaving Alice deceived. Alice and Bob’s epistemic states are illustrated graphically in Figure 2.6.

How forgivable is Alice’s deception? Believing that Alice is deceiving him, Bob seeks evidence in Alice’s behavior of “tricks” and “maneuvers” that are meant to lead him astray. He may even interpret some of her nondeceptive behaviors as being inspired by a deceptive intent. All of these bits of evidence may stack up against Alice to the point at which Bob decides to call her bluff, and they figure in Bob’s decision to punish Alice without first confronting her with the discrepancy in order to determine whether or not what seems to him to be a lie is in fact a misunderstanding.

Figure 2.6 Epinet for Vignette 1

Vignette 2. Alan sends an electronic message, p, to Belinda and also sends it to Charles, who is Belinda’s boss, as a “blind carbon copy” (bcc). Thus, Alan believes Belinda believes p, Belinda believes Alan believes p, Alan believes Charles believes p, and Charles believes Alan believes p. However, Belinda does not believe that Charles believes p—otherwise, a “carbon copy” (cc) notation would have appeared on the message. Charles, for his part, believes Belinda believes p and moreover believes that Belinda does not believe that Charles believes p. Charles can therefore “play with” Belinda, assuming that she does not believe that he believes p. Charles also knows that Alan believes that Belinda does not believe that Charles believes p and therefore can also wield some power over Alan by credibly threatening to respond to Alan’s message p and sending a copy of the response to Belinda, thereby revealing the original “bcc.” These epistemic states of Alan, Belinda, and Charles are mapped out in Figure 2.7.

Figure 2.7 Epinet for Vignette 2

Using Epinets to Describe the Epistemic Structure of Social Networks

The elements of the EDL we have introduced allow us to refer to parts of a social network in terms of the degree to which agents comprising it share epistemically coherent belief hierarchies. The logical (syntactical) and semantic coherence of underlying beliefs has for some time been known to function not only as an important logical precondition for the achievement of equilibria in games but also as an important facilitator of coordination and co-mobilization (Chwe 1999, 2001). These forms of coherence have been used as explanatory concepts in reverse models that seek to explain observed patterns of interaction as outcomes of an equilibrium set of strategies selected by a group of interacting agents given a set of beliefs and utilities. However, they have not been used in forward models to make predictions about the specific sets of agents most likely to successfully coordinate and co-mobilize. That is the first use we make of epistemic states and the epinets they form. In particular, we use the states of epistemically linked agents to describe a set of structures (defined by agents, what they know, and what they know others know) for making predictions about achievable coordination and co-mobilization regimes. These structures include the following.

Mutual Knowledge Neighborhood (NKn(G)). Subnetwork S ⊆ G of network G that shares level 2 almost-common knowledge of P. Mutual knowledge neighborhoods can be used to describe networks in which there is knowledge about shared knowledge. Deliberations of professional associations (e.g., the American Medical Association, the Institute of Electrical and Electronics Engineers) can be understood as turning shared knowledge into mutual and sometimes common knowledge as they unfold: each participant discloses by communicating what he or she knows, which in turn becomes known to the other participants. Mutual knowledge undergirds subnetwork mobilization in situations characterized by “I’ll go if you go” scenarios (Chwe 1999, 2000). Suppose A knows “I’ll go iff B goes” and B knows “I’ll go iff A goes.” A will not mobilize unless she also knows of B that he’ll go iff she goes, and B will not mobilize unless he knows of A that she’ll go iff he goes. If A does indeed know that B will mobilize if and only if she mobilizes, and B knows that A will mobilize if and only if he mobilizes, then mobilization can take place in the two-agent neighborhood (A, B), as the sketch below illustrates.
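A minimal check of this two-agent condition, using the same illustrative “nested knowledge tuple” encoding of an epinet as in the earlier sketches, might read as follows.

    # Sketch: the two-agent "I'll go iff you go" mobilization condition.
    # Each agent's conditional willingness is itself a proposition; the epinet
    # again holds nested knowledge tuples such as ("A", "B", <proposition>).

    def will_comobilize(epinet, a, b):
        """a and b co-mobilize iff each knows her own condition and each knows
        the other's condition (mutual knowledge of the 'I'll go iff you go' rules)."""
        a_condition = f"{a} goes iff {b} goes"
        b_condition = f"{b} goes iff {a} goes"
        return (
            (a, a_condition) in epinet and (b, b_condition) in epinet  # own conditions
            and (a, b, b_condition) in epinet                          # a knows b's condition
            and (b, a, a_condition) in epinet                          # b knows a's condition
        )

    epinet = {
        ("A", "A goes iff B goes"), ("B", "B goes iff A goes"),
        ("A", "B", "B goes iff A goes"), ("B", "A", "A goes iff B goes"),
    }
    print(will_comobilize(epinet, "A", "B"))  # True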

Almost-Common Knowledge Neighborhood (NSE(G)). Linked subnetwork S ⊆ G of network G that shares level n common knowledge of P. “Full-common knowledge” neighborhoods (n = ∞) can be used to model networks in which relevant knowledge P is self-evident. This condition obtains, for instance, in a situation in which P is uttered by one agent in the presence of all other agents, resulting in an epistemic condition where everyone knows P, everyone knows that everyone knows P, and so forth, ad infinitum. Almost-common knowledge of a finite level n is always a weaker condition than full-common knowledge. If a proposition is common knowledge, it is also mutual knowledge (level 2) and almost-common knowledge (level 3) but not vice versa.

Because commonality of knowledge about relevant facts and about the ways in which agents think about these facts is such an essential condition for network coordination phenomena, and because it can (at least in theory) be established through face-to-face communication (e-mail trails and exchanges can at best establish almost-common knowledge of level n, as Rubinstein (1986) demonstrates), understanding it as an epistemic condition that enables coordination highlights the importance of such otherwise curious phenomena as face-to-face meetings in the age of ubiquitous untethered broadband communications. However, almost-common knowledge is often a sufficient condition for coordination among agents. If Amy and Ben are trying to meet one another on a small island that has a single large hill, but neglect to establish a meeting place beforehand, then a sufficient condition for them to solve their coordination problem is that (1) Amy knows Ben knows Amy knows Ben will head for the hill and (2) Ben knows Amy knows Ben knows Amy will head for the hill (i.e., n = 3). Level 2 and level 3 almost-common knowledge represent important conditions for co-mobilization and coordination in networks and, accordingly, preconditions for turning network closure into social capital whose value resides in enabling co-mobilization and coordination.

Figure 2.8 graphically depicts the different “epistemic regions” of a network—in terms of the sharedness (distribution, commonality of various degrees) of agents within the network with respect to a proposition (or a set of propositions), P—that encode facts of joint relevance to the epistemically connected agents.

Network Epistemics and the Explanation of Social Network Phenomena

The epistemic landscape and the topology of epinets describing the beliefs relevant to a set of networked agents seem crucial to social network analysis; the structure and dynamics of the landscape constitute a critical set of explanatory mechanisms for well-established social network effects such as coordination, brokerage, and closure. Let us now briefly consider a variety of familiar network problems—network formation, coordinated mobilization, status, trust, and information spread—in order to illustrate, in a preliminary manner, the ways in which our EDL and a theory of network epistemics can help deepen and sharpen the explanatory power of social network analysis. We emphasize the logical dependence of existing explanatory models on assumptions about the underlying epistemic states of connected agents.

Figure 2.8 Epinet Describing a Social Network in Terms of Epistemic States

source: Adapted from Moldoveanu and Baum, 2011.

Network Formation. Economic treatments of network formation typically ask which networks tend to form when self-interested agents are free to associate with any other agents; such treatments also inquire about the relative efficiency of different patterns of association as a function of (1) the value function mapping each trade or sequence of trades to a value metric and (2) an allocation rule R that maps each (n + 1)-tuple made up of the value v associated with an n-agent interaction and the specific n agents that participated in the interaction into an n-tuple comprising the fractions of v claimed ex post by each of the n agents (Hendricks, Piccione, and Tan 1999; Jackson 2005). Empirically minded networks researchers have reason to ask, “What are the epistemic conditions that support the resulting predictions of efficient network formation?”

At the oblivious end of the epistemic spectrum, if there is a value v to be realized by agents j and k by transacting, but they are not both aware of it, we expect that the transaction will be foregone, in spite of the fact that it would have been advantageous for both agents to engage in it. If only j knows of v, then, to be able to predict that v will materialize, we need to posit a mechanism by which j will credibly inform k of it. Not any mechanism will do, as the large body of literature on cheap talk illustrates (Farrell 1996): only one that makes j’s communication of v credible to k will do. Postulating such a mechanism involves us in the conjecture of additional epistemic conditions on j and k: k must either trust j (raising questions about the epistemic conditions under which trust obtains) or have a communicative interaction that makes j’s transmission of v to k self-reinforcing, which also involves a set of assumptions about k’s epistemic states regarding j.

Coordinated Mobilization. Chwe (1999, 2000) considers a model for social mobilization in which agents are connected in a communication/interaction network whose structure is common knowledge among them. Each agent has two options—mobilize (M) and do not mobilize (NM)—and a utility function that associates a payoff with each of the possible consequences of that agent’s actions. These consequences depend on whether or not a sufficient number of other agents in the same network mobilize, and this condition holds for each agent in turn. Chwe (2000) argues that common knowledge of payoffs at the level of a minimum mobilization unit determines the minimum sufficient network that mobilizes: an agent finds it worthwhile (in terms of net benefit) to mobilize iff at least two of his neighbors mobilize as well, and each neighbor knows this and knows that every one of the three neighbors knows it. In this way, mobilization takes place.
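
The threshold side of such a model is easy to render in code. The sketch below (our illustration; the common-knowledge requirement itself is not modeled) iterates a simple rule in which an agent mobilizes once at least a threshold number of its neighbors have mobilized; the adjacency list, the seed set, and the threshold value are assumptions of the sketch.

```python
# A minimal threshold-mobilization sketch: each agent mobilizes iff at least
# `threshold` of its neighbors mobilize. Seeds, threshold, and the toy
# adjacency list are assumptions of this illustration.

def mobilize(neighbors, seeds, threshold=2):
    """Iterate to a fixed point: an agent joins once enough neighbors have."""
    mobilized = set(seeds)
    changed = True
    while changed:
        changed = False
        for agent, nbrs in neighbors.items():
            if agent not in mobilized and \
               sum(1 for n in nbrs if n in mobilized) >= threshold:
                mobilized.add(agent)
                changed = True
    return mobilized

# A four-agent clique in which two seeds are enough to tip the rest.
neighbors = {
    "a": {"b", "c", "d"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"a", "b", "c"},
}
print(mobilize(neighbors, seeds={"a", "b"}))  # {'a', 'b', 'c', 'd'}
```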

Interactive epistemic states are crucial to mobilization models of this kind. One can take cliques (fully connected subnetworks) to be synonymous with pockets of local common knowledge. But they need not be synonymous, because members of a clique need not all trust one another to the level required for the communication that establishes common knowledge to take place. Consider a situation in which there is a clique-wide shared suspicion that one group member is a spy—a suspicion that is mutual knowledge, but not common knowledge, among all members but the putative spy. Establishing common knowledge of the spy without tipping him or her off requires precisely the kind of communicative trust that the clique does not enjoy. Nor is full common knowledge necessary in many mobilization scenarios: mutual knowledge of co-mobilization conditions may suffice.

One may communicate with several others who are all communicating with each other (i.e., be part of a clique), yet the mobilizability profile of each agent may not come to be common knowledge among clique members. The implication is that we cannot infer common knowledge of mobilization thresholds from clique membership alone, and that we need tools that help us describe epistemic states and events in networks in greater detail (i.e., in terms of level 1, 2, 3 . . . almost-common knowledge and more complex epistemic conditions like trust). The tools required must also yield leading indicators of mobilization potential within a network.

Status. The status of an agent in a network is an important explanatory variable in theories of interorganizational networks (Podolny 1993; Benjamin and Podolny 1999) and is sometimes defined by the agent’s Bonacich centrality (Bonacich 1987), which measures the relative connectedness of agents to other agents in the network who are themselves well-connected. The Bonacich measure is easily applied to networks in cases where agents are (well-) cited, referenced, publicized, and so forth, in forums that are (well-) attended—hence, its shorthand description as an agent-specific measure of connectedness to the well-connected or of knownness to the well-known.
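
For concreteness, the sketch below computes Bonacich’s (1987) power centrality, c(alpha, beta) = alpha (I − beta A)^(−1) A 1, for a toy adjacency matrix; the example network and the parameter values are our assumptions, chosen only so that the matrix inverse exists.

```python
# A minimal numpy sketch of Bonacich (1987) power centrality,
# c(alpha, beta) = alpha * (I - beta * A)^(-1) @ A @ 1.
# The star network and the parameter values are assumptions of the sketch;
# alpha is a scaling parameter, and beta must be smaller than the reciprocal
# of the largest eigenvalue of A for the inverse to be well behaved.
import numpy as np

def bonacich_centrality(A, alpha=1.0, beta=0.1):
    n = A.shape[0]
    ones = np.ones(n)
    return alpha * np.linalg.inv(np.eye(n) - beta * A) @ A @ ones

# Star network on four nodes: node 0 is connected to nodes 1, 2, and 3.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(bonacich_centrality(A))  # the hub scores highest
```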

Bonacich centrality does not, however, capture many of the ways in which status differences matter, because (1) it does not admit any semantic content (i.e., one is well connected to the well-connected or not, rather than known for positive/negative quality P by those who are themselves known for positive/negative quality P); and (2) it does not easily allow for an intuitive recursive generalization to higher levels of connectedness that may matter (i.e., knownness to those who are themselves well known for their well-knownness to well-known others). A precise description language for epistemic states accommodates useful extensions of classical status measures, and it allows us to posit mechanisms by which agents can change their status through the strategic manipulation of the semantic contents of their measures of relative knownness (i.e., of the structure of what they are known for).

Trust. Trust, a tricky variable that can benefit from detailed epistemic analysis, is often defined as “the mutual expectation of cooperative behavior” (Burt and Knez 1995). More detailed definitions are possible, but let us unpack this common one. We focus first on “mutual”: j expects k to act cooperatively and vice versa. But if j does not expect k to expect her to act cooperatively, there will be situations (e.g., a finite-horizon prisoner’s dilemma payoff structure) in which it is logically incorrect for j to expect k to behave cooperatively. This induces a belief structure for the pair in which (1) j trusts k but (2) j does not expect k to trust her, and therefore (3) j finds it irrational to trust k, by the definition of trust as the mutual expectation of cooperation, which may or may not lead to a reversal of (1), depending on how j resolves the resulting inconsistency. Persevering in believing (1) has costs for j, who must come to grips with the resulting proposition: (4) “I am irrational,” absent some other reason for continuing to trust k while knowing that k does not trust her. To avoid such problems, the expectation of cooperative behavior must be at least mutual knowledge between j and k: each must know that the other knows it, lest we run into problems of logical coherence.

Now we focus on “cooperative”: j can have cooperative intentions that are causally linked to behavior that k may deem counter-cooperative: j may intend to help k, for example, but produce behavior that hurts him. Thus, the precise mapping of observable states of the world onto imputed intentions should be mutual knowledge between j and k as well: j and k should impute the same “intentionality” to the same observable behaviors they see each other produce.

This is complicated but by itself still insufficient: the expectation of cooperative behavior is not restricted to observed behaviors, but also involves unobserved behaviors. If I trust you (by the standard definition), I am confident that you will act cooperatively in circumstances that neither of us has experienced before: I know that “whatever happens, you will act cooperatively.”

Finally, we focus on “whatever happens”: for me to infer that you have acted cooperatively in a situation neither one of us had anticipated, the rules that you use to map a “behavior” onto an “intention” must be known to me, and the rules that I use to accomplish the same feat must be known to you: these rules must also be mutual knowledge. No doubt, trust is an elusive phenomenon. It is elusive in part because it is epistemically complicated, but it is nevertheless intelligible and can be elucidated through analysis of its epistemic structure. The epistemic analysis of trust aims to turn an elusive phenomenon into a merely “complicated” one.
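
The discussion above assembles, in effect, a checklist of epistemic preconditions for trust. The sketch below (our illustration, not a definition from the text) records those preconditions as plain-string statements and checks whether a given epinet contains all of them; the wording of each statement is an assumption of the sketch.

```python
# A minimal sketch: the epistemic preconditions for trust discussed above,
# recorded as plain-string statements and checked against an epinet given
# simply as a set of statements taken to hold.

TRUST_PRECONDITIONS = [
    "j knows that k expects cooperative behavior",
    "k knows that j expects cooperative behavior",
    "j knows k's mapping of behaviors onto intentions",
    "k knows j's mapping of behaviors onto intentions",
    "j knows the rules k applies in unanticipated situations",
    "k knows the rules j applies in unanticipated situations",
]

def trust_preconditions_met(epinet_statements):
    """True iff every precondition on the checklist holds in the epinet."""
    return all(s in epinet_statements for s in TRUST_PRECONDITIONS)

print(trust_preconditions_met(set(TRUST_PRECONDITIONS)))   # True
print(trust_preconditions_met({TRUST_PRECONDITIONS[0]}))   # False
```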

Information Spread. Decay functions are standard ways in which network theorists produce (network-distance-weighted) measures of inter-agent separation (Burt 1992). If j is at distance D from k (in network terms, D is usually measured by the number of network edges that j must traverse to reach k), then a plausible “decay function” that can be used to discount the strength of the tie between j and k is e^(-rD), where r is some positive constant. The conductivity of the network is usually poor and is almost always assumed to be isotropic, or identical in all directions. Information is distorted, misreported, unreported, and dis- or mistrusted as it makes its way through the “grapevine,” but these informational decays are modeled as similar across different grapevines.
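
A minimal sketch of this distance-decay idea, assuming the reconstructed form e^(-rD), a toy adjacency list, and an illustrative value of r: shortest-path distance is found by breadth-first search and then discounted exponentially.

```python
# A minimal sketch of isotropic, distance-decayed tie strength e^(-r * D),
# where D is the shortest-path distance between two agents. The adjacency
# list and the value of r are assumptions of this illustration.
import math
from collections import deque

def shortest_path_length(neighbors, source, target):
    """Breadth-first search for the number of edges between two agents."""
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == target:
            return dist
        for nbr in neighbors[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return math.inf

def tie_strength(neighbors, j, k, r=0.5):
    """Isotropic exponential decay of tie strength with network distance."""
    return math.exp(-r * shortest_path_length(neighbors, j, k))

neighbors = {"j": {"a"}, "a": {"j", "b"}, "b": {"a", "k"}, "k": {"b"}}
print(tie_strength(neighbors, "j", "k"))  # e^(-0.5 * 3), roughly 0.223
```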

Is decay an inevitable function of network distance? If so, under what conditions is it isotropic? Can we not have superconductive paths and subnetworks within a network that preserve both the accuracy and trustworthiness of the information reported? If such superconductive paths exist, then we can expect to find different levels of conductivity—and, accordingly, network anisotropy—similar to the variety we see at the levels of cliques, brokers, and center-periphery networks (Burt 1992). The keys to a more precise model of network conductivity are the usual suspects for an applied epistemologist: trust and status (or reputation), each of which needs an epistemic undergirding.

Does the Epistemic Description Language Deliver on Our Desiderata?

We began this chapter with a set of objectives that our EDL should deliver on, and this is a good time to take stock and appraise its performance under the rubrics we introduced:

Precision and Subtlety. The EDL supplies a syntax that precisely represents the various ways in which human agents relate to (truth-functional) propositions, but it does not restrict, ex hypothesi, the range of epistemic states that they can be represented as holding. Unlike standard state-space models in game theory, our EDL does not preclude agents’ states of ignorance and oblivion vis-à-vis propositions; it also allows us to distinguish between agents’ epistemic models—what they think is true and what they think others think is true—and our models of agents’ epistemic models—what we think is true and what we think agents think other agents think is true. The resulting epistemic logic is also sufficiently flexible to accommodate more subtle epistemic states—such as the subjunctive beliefs underlying intuitions about confidence—for which standard epistemic models do not have “conceptual room.”

Operationalizability. The EDL’s conceptual and graphical description tool kit makes it readily operationalizable for the rapid mapping of arbitrarily sized epinets. It does not require complex “translation” between the standard tools of network researchers (surveys, questionnaires) and the language in which agents and their epistemic states are represented—as is required, for instance, by Harsanyi’s (1967, 1968a, 1968b) theory of types.

Logical Auditability. The EDL maps epinets onto a set of statements (e.g., AkBkAkP) that represent propositions whose truth values can be independently ascertained and that, moreover, can form the basis of higher-level epistemic states. The resulting statements allow us to describe “who knows what about whom, and when” in a social network and to derive hypotheses about the logical implications of the epistemic states of each agent.
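
As an illustration of what such auditing might look like in practice (a sketch under our own notational assumptions, not a tool described in the text), the code below unpacks a compact statement such as AkBkAkP into its chain of knowledge operators and reports which prefixes of the chain hold in a recorded epinet, so one can see exactly where a higher-level epistemic state breaks down. Single-letter agent and proposition names are assumed so that the statement can be split on the operator symbol k.

```python
# A minimal sketch: unpack a compact statement such as "AkBkAkP" and audit
# which prefixes of the knowledge chain hold in a recorded epinet. The
# single-letter naming convention is an assumption of this illustration.

def unpack(statement):
    """'AkBkAkP' -> (['A', 'B', 'A'], 'P')."""
    parts = statement.split("k")
    return parts[:-1], parts[-1]

def audit(statement, recorded):
    """Return the prefixes of the chain (shortest first) that hold in the
    recorded epinet, so we can see where the chain breaks down."""
    agents, proposition = unpack(statement)
    held = []
    chain = proposition
    for agent in reversed(agents):
        chain = agent + "k" + chain
        if chain in recorded:
            held.append(chain)
    return held

recorded = {"AkP", "BkAkP"}            # A knows P; B knows A knows P
print(audit("AkBkAkP", recorded))      # ['AkP', 'BkAkP'] -- level 3 missing
```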

Fit with Theoretical and Empirical Models. Because epinets are themselves networks, they permit treatment using the basic analytical tools employed by network researchers generally. These tools typically specify and measure the topological properties of a network—clustering, closure, centrality, diameter—and make predictions about the evolution of the network on the basis of such properties. Because it is an expanded, directed graph, an epinet is amenable to precisely the same analyses. Moreover, because epinets are representable as statements using epistemic logic operators, they are also compatible with the tools for analyzing human interactions through the precise representation of “everyday” insights in a formal language.
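
For instance, a toy epinet encoded as a directed graph can be handed directly to off-the-shelf network measures; the sketch below uses the networkx library, and the three agents and their “knows-about” edges are our assumptions.

```python
# A minimal sketch: a toy directed epinet fed to the same topological
# measures network researchers already use. The agents and edges are
# assumptions of this illustration.
import networkx as nx

epinet = nx.DiGraph()
epinet.add_edges_from([
    ("Alice", "Bob"),      # e.g., Alice holds a belief about Bob's beliefs
    ("Bob", "Alice"),
    ("Bob", "Carol"),
    ("Carol", "Alice"),
])

print(nx.degree_centrality(epinet))           # per-agent centrality
print(nx.clustering(epinet.to_undirected()))  # local closure
print(nx.diameter(epinet.to_undirected()))    # longest shortest path
```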

Summary

We have introduced a language for describing the epistemic states of human agents and the relationships among them. This language, EDL, allows researchers to measure and represent epistemic states and structures using the surveys and empirical research instruments with which they are already familiar, and it allows modelers to manipulate epistemic state descriptors symbolically in order to derive predictions about the statics and dynamics of epistemic structures in social networks (or to construct proofs of the epistemic preconditions for them). We are now in possession of an epistemic network modelers’ tool kit that we will use to illuminate and deepen our understanding of a variety of network relations, structures, and phenomena.
