
13 Belief Logic

Our belief logic is “logic” in an extended sense. Instead of studying what follows from what, it studies patterns of consistent believing and willing; it generates consistency norms that prescribe that we be consistent in various ways. We’ll start with a simplified system and then add refinements.

13.1 Belief translations

We’ll use “:” to construct descriptive and imperative belief formulas:

1. The result of writing a small letter and then “:” and then a wff is a descriptive wff.

2. The result of writing an underlined small letter and then “:” and then a wff is an imperative wff.

Statements about beliefs translate into descriptive belief formulas:

· You believe that A is true = u:A

· You don’t believe that A is true = ∼u:A

· You believe that A is false = u:∼A

· You don’t believe A and you don’t believe not-A

= (∼u:A • ∼u:∼A)

If you refrain from believing A, you might believe that A is false or you might take no position on A. Here are some further translations:

· You believe that you ought to do A

= u:OAu̲

· Everyone believes that they ought to do A

= (x)x:OAx̲

· You believe that if A then not-B

= u:(A ⊃ ∼B)

· If you believe A, then you don’t believe B

= (u:A ⊃ ∼u:B)

Since our belief logic generates norms prescribing consistency, it focuses on imperative belief formulas – which we express by underlining the small letter:

· Believe that A is true = u̲:A

· Don’t believe that A is true = ∼u̲:A

· Believe that A is false = u̲:∼A

· Don’t believe A and don’t believe not-A

= (∼u̲:A • ∼u̲:∼A)

· Believe that you ought to do A

= u̲:OAu̲

· Let everyone believe that they ought to do A

= (x)x̲:OAx̲

As before, we distinguish between if-then and don’t-combine forms:

· If you in fact believe A, then don’t believe B

= (u:A ⊃ ∼u̲:B)

· Don’t combine believing A with believing B

= ∼(u̲:A • u̲:B)

13.1a Exercise: LogiCola N (BM & BT)

Translate these sentences into wffs (use “u” for “you” and “G” for “There’s a God”).

You believe that there’s a God. (You’re a theist.)

u:G

1. You believe that there’s no God. (You’re an atheist.)

2. You take no position on whether there’s a God. (You’re an agnostic.)

3. You don’t believe that there’s a God. (You’re a non-theist.)

4. You believe that “There’s a God” is self-contradictory.

5. Necessarily, if you’re a theist then you aren’t an atheist. (Is this statement true?)

6. Believe that there’s a God.

7. If “There’s a God” is self-contradictory, then don’t believe that there’s a God.

8. If you believe A, then you don’t believe not-A.

9. If you believe A, then don’t believe not-A.

10. Don’t combine believing A with believing not-A.

13.2 Belief proofs

There are three approaches to belief logic. First, we might study what belief formulas validly follow from what other belief formulas. We might try to prove arguments like this one:

· You believe A.

∴ You don’t believe not-A.

· u:A

∴ ∼u:∼A

But this is invalid, since people can be confused and illogical. Students and politicians can assert A and assert not-A almost in the same breath. Beginning ethics students often write things like this (§4.3):

Since morality is relative to culture, no duties bind universally. What’s right in one culture is wrong in another. Universal duties are a myth. Relativism should make us tolerant toward others; we can’t say that we’re right and they’re wrong. So everyone ought to respect the values of others.

Here “No duties bind universally” clashes with “Everyone ought to respect the values of others.” As Socrates was adept at showing, our unexamined views are often filled with inconsistencies. But then, given that someone believes A, we can deduce little or nothing about what else the person believes or doesn’t believe. So this first approach to belief logic is doomed to failure.

A second approach studies how we’d believe if we were completely consistent. A person X is completely consistent (an idealized notion) if and only if:

1. the set S of things that X believes is logically consistent, and

2. X believes whatever follows logically from set S.

Our previous argument would be valid if we added, as an additional premise, that you’re completely consistent:

· You’re completely consistent. (implicit)

You believe A.

∴ You don’t believe not-A.

Belief logic would take “You’re completely consistent” as an implicit premise; this would be assumed, even though it’s false, to help us explore what belief patterns a consistent person would follow. While this works,1 I prefer a third approach, in view of what I want to do in the next chapter.

1 Jaakko Hintikka used roughly this second approach in his classic Knowledge and Belief (Ithaca, New York: Cornell University Press, 1962).
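The two clauses of complete consistency lend themselves to a mechanical check on small propositional examples. Here is a minimal sketch (my own illustration, not part of the book’s proof machinery), treating each belief as a truth-function over a few atoms:

```python
from itertools import product

ATOMS = ["A", "B"]

def assignments():
    # every truth-value assignment to the atoms
    for vals in product([True, False], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, vals))

def satisfiable(formulas):
    # clause 1: some assignment makes everything in the set true
    return any(all(f(v) for f in formulas) for v in assignments())

def entails(formulas, g):
    # g follows from the set: g is true wherever the whole set is true
    return all(g(v) for v in assignments() if all(f(v) for f in formulas))

# X believes A and believes (if A then B); does B follow?
beliefs = [lambda v: v["A"], lambda v: (not v["A"]) or v["B"]]
print(satisfiable(beliefs))                # True: clause 1 holds
print(entails(beliefs, lambda v: v["B"]))  # True: clause 2 requires believing B
```

A completely consistent believer with these two beliefs would thus also believe B.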

My third approach generates consistency imperatives, like these:

· Don’t combine believing A with believing not-A.

∼(u̲:A • u̲:∼A)

· Don’t combine believing A-and-B with not believing A.

∼(u̲:(A • B) • ∼u̲:A)

This third approach will assume that we ought to be consistent – we ought not to combine inconsistent beliefs and we ought not to believe something without also believing whatever follows from it. While this basic idea is plausible (but subject to qualifications, see §13.7), it’s not easy to systematize logically.

Our belief logic adds belief worlds and inference rules to our proof machinery. We represent a belief world by a string of one or more instances of a small-letter constant. Since most of our belief norms use a generic “you,” our belief worlds will typically be “u,” “uu,” “uuu,” and so on. So a world prefix is now a string of zero or more instances of letters from the set {W, D} ∪ C, where C is the set of small-letter constants. Our two inference rules use belief worlds; while it’s fairly easy to use these rules mechanically, it’s difficult to get an intuitive grasp of how they work. Let me try to explain them.

First, let a belief policy be a set of imperatives about what someone (typically a generic “you”) is or is not to believe. Here’s an example:

Believe that Michigan will play.

u̲:P

Be neutral about whether Michigan will win.

(∼u̲:W • ∼u̲:∼W)

This policy prescribes a way to believe that’s consistent (but boring). In general, a belief policy prescribes a consistent way to believe if and only if (1) the set S of things that the person is told to believe is logically consistent, and (2) the person isn’t forbidden to believe something that follows logically from set S. Our task here is to express this idea using possible worlds. I want to reject belief policies, such as this one, that prescribe an inconsistent way to believe:

Believe A and believe not-A.

(u̲:A • u̲:∼A)

How do we reject such policies using possible worlds?
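For small propositional policies, the two-clause test above can be run directly: clause (1) asks whether the told-to-believe set is satisfiable, and clause (2) asks whether anything the policy forbids believing nevertheless follows from that set. A hypothetical sketch (the representation is mine, not the book’s):

```python
from itertools import product

ATOMS = ["P", "W"]

def assignments():
    for vals in product([True, False], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, vals))

def follows(told, g):
    # g follows from the told-to-believe set
    models = [v for v in assignments() if all(f(v) for f in told)]
    return bool(models) and all(g(v) for v in models)

def consistent_policy(told, forbidden):
    # clause (1): the prescribed beliefs are jointly satisfiable
    if not any(all(f(v) for f in told) for v in assignments()):
        return False
    # clause (2): nothing forbidden follows from the prescribed beliefs
    return not any(follows(told, g) for g in forbidden)

# Michigan policy: believe P; be neutral on W (don't believe W, don't believe not-W)
told = [lambda v: v["P"]]
forbidden = [lambda v: v["W"], lambda v: not v["W"]]
print(consistent_policy(told, forbidden))   # True: a consistent (if boring) policy

# "Believe A and believe not-A" (using P for A) fails clause (1)
print(consistent_policy([lambda v: v["P"], lambda v: not v["P"]], []))  # False
```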

A belief world (relative to a belief policy about what a person is told to believe) is a possible world that contains all the statements that the person is told to believe. So if you’re told to believe A, then all your belief worlds have A. Individual belief worlds may contain further statements. For example, if you’re told to be neutral about B (not to believe B and not to believe not-B), then some of your belief worlds will have B and some will have not-B. What’s common to all your belief worlds is what you’re told to believe. If a belief policy (about what you’re told to believe) forces a belief world to be self-contradictory, then the belief policy tells you to believe inconsistently; and then (by an implicit “Be consistent” built into the system) we reject the belief policy.

Our first inference rule, B+, says that, if you’re told to believe A, then A is in all your belief worlds: u, uu, uuu, and so on. Rule B+ operates on positive imperative belief formulas; here any wff can replace “A” and any small letter can replace “u”:

B+

u̲:A → u ∴ A,

use any string of u’s

The line with “u̲:A” can use any world prefix with no small letters or “W,”1 and the line with “u ∴ A” must use a world prefix that’s the same except that it adds at the end a string of one or more instances of “u” (or of the small letter that replaces “u”). If we have “u ∴ A” in a proof, “u” refers to a belief world based on what you’re told to believe. (If instead we have “Du ∴ A,” then we have a belief world based on what you’re told to believe in deontic world D.)

1 This proviso (about small letters and “W”) blocks proofs of questionable wffs that place one imperative belief operator within another, like “b̲:∼(c̲:A • c̲:∼A),” or claim logical necessity for consistency imperatives, like “☐∼(u̲:A • u̲:∼A).”

We can use B+ to prove this consistency imperative: “Don’t combine believing A with believing not-A.” First assume its opposite: “Believe A and believe not-A.” Then use B+ to construct a belief world that contains everything that you’re told to believe. Since this world necessarily has contradictions, “Believe A and believe not-A” tells us to believe inconsistently; then (by an implicit “Be consistent” built into the system) we can derive the opposite: “Don’t combine believing A and believing not-A.” Here’s the proof in symbols:

· [ ∴ ∼(u̲:A • u̲:∼A)

· * 1 asm: (u̲:A • u̲:∼A)

· 2 ∴ u̲:A {from 1}

· 3 ∴ u̲:∼A {from 1}

· 4 u ∴ A {from 2}

· 5 u ∴ ∼A {from 3}

· 6 ∴ ∼(u̲:A • u̲:∼A) {from 1; 4 contradicts 5}

B+ puts the statements you’re told to believe into belief world u. Since world u has contradictions, our assumption prescribes an inconsistent combination of belief attitudes. So we reject it and derive the original conclusion.2

2 Our proof doesn’t show that this conclusion is logically necessary; instead, it shows that it follows from an implicit “One ought to be consistent” premise.
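The reductio above can be mimicked in a few lines. B+ dumps every wff you’re told to believe into one belief world u; if u then holds some wff together with its negation, the assumed policy is rejected. A toy sketch (my own, with wffs as plain strings):

```python
def neg(s):
    # syntactic negation: "A" <-> "~A"
    return s[1:] if s.startswith("~") else "~" + s

def world_from_b_plus(positives):
    # B+: everything you're told to believe enters belief world u
    return set(positives)

def contradictory(world):
    return any(neg(s) in world for s in world)

# assumption for reductio: "Believe A and believe not-A"
u = world_from_b_plus(["A", "~A"])
print(contradictory(u))   # True, so we reject the assumption
```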

We defined “X is completely consistent” using two clauses:

1. the set S of things that X believes is logically consistent, and

2. X believes whatever follows logically from set S.

While B+ captures the first clause, we need rule B− to capture the second. By B−, if you’re told NOT to believe A, then not-A must be in SOME of your belief worlds. So if you’re told to be neutral about A (NOT to believe A and NOT to believe not-A) then some of your belief worlds will have A and some will have not-A. Rule B− operates on negative imperative belief formulas; any pair of contradictory wffs can replace “A” / “∼A” and any small letter can replace “u”:

B−

∼u̲:A → u ∴ ∼A,

use a new string of u’s

The line with “∼u̲:A” can use any world prefix not containing small letters or “W” – and the line with “u ∴ ∼A” must use a world prefix that’s the same except that it ends with a new string (one not occurring in earlier lines) of one or more instances of “u” (or of the small letter that replaces “u”).

We need B− to prove this consistency imperative: “Don’t combine believing A-and-B with not believing A.” First assume its opposite: “Believe A-and-B, but don’t believe A” – which tells us to believe something but not what logically follows from it. Here’s the proof:

· [ ∴ ∼(u̲:(A • B) • ∼u̲:A)

· * 1 asm: (u̲:(A • B) • ∼u̲:A)

· 2 ∴ u̲:(A • B) {from 1}

· * 3 ∴ ∼u̲:A {from 1}

· 4 u ∴ ∼A {from 3}

· 5 u ∴ (A • B) {from 2}

· 6 u ∴ A {from 5}

· 7 ∴ ∼(u̲:(A • B) • ∼u̲:A) {from 1; 4 contradicts 6}

By B−, since you’re told NOT to believe A, we put “∼A” into new belief world u (line 4). We put what you’re positively told to believe into the same belief world u and then get a contradiction. Our assumption prescribes an inconsistent combination of belief attitudes. So we derive the original conclusion.

Our proof strategy goes as follows:

· First use rule B− on negative imperative belief formulas (formulas that say to refrain from believing something). Use a new belief world each time. You can star (and then ignore) a line when you use B− on it.

· Then use B+ on positive imperative belief formulas (formulas that say to believe something). Use each old belief world of the person in question each time. (Use a single new belief world if you have no old ones.) Don’t star a line when you use B+ on it.
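A rough sketch of this two-step strategy (mine, not the book’s; wffs are plain strings, and conjunctions are assumed already unpacked into their conjuncts):

```python
def neg(s):
    return s[1:] if s.startswith("~") else "~" + s

def build_worlds(positives, negatives):
    # B- first: each negative imperative opens a NEW world holding the negated wff
    worlds = [{neg(g)} for g in negatives] or [set()]
    # B+ second: each positive imperative's wff goes into EVERY old world
    for w in worlds:
        w.update(positives)
    return worlds

def contradictory(w):
    return any(neg(s) in w for s in w)

# "Believe A-and-B, but don't believe A": the positives contribute A and B,
# the negative contributes ~A in a new world
worlds = build_worlds(["A", "B"], ["A"])
print(any(contradictory(w) for w in worlds))   # True: derive the consistency norm
```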

Both rules operate only on imperative belief formulas (like “∼u̲:A” or “u̲:A”) – not on descriptive ones (like “∼u:A” or “u:A”). Our belief worlds are about what a belief policy tells you to believe, not about what you actually believe. Our proof structure is designed to prove consistency norms.

Our recent systems had rules for reversing squiggles; for dropping weak operators (some, possible, permissible); and for dropping strong operators (all, necessary, ought). Belief logic is different, since there’s no convenient weak operator to go with “You believe that A” (the weak operator would have to mean “You don’t believe that not-A”). Belief logic is like a modal logic with “☐” but no “◇”: besides having the drop-box rule for “☐A,” we’d then need a rule saying that from “∼☐A” we can put “∼A” into a new world W (like B−).

Our consistency norms have a don’t-combine form, forbidding inconsistent combinations. They tell you to make your beliefs coherent with each other; but they don’t say what beliefs to add or subtract to bring this about. Suppose that P (premise) logically entails C (conclusion); compare these three forms:

· (u:P ⊃ u̲:C) If you believe the premise, then believe the conclusion

· (∼u:C ⊃ ∼u̲:P) If you don’t believe the conclusion, then don’t believe the premise

· ∼(u̲:P • ∼u̲:C) Don’t combine believing the premise with not believing the conclusion

Suppose you believe the premise but don’t believe the conclusion; then you violate all three. What should you do? The first form tells you to believe the conclusion; but maybe the conclusion is irrational and you should reject both premise and conclusion. The second tells you to drop the premise; but maybe the premise is solid and you should accept both premise and conclusion. So the first two forms can guide you wrongly. The third is better; it simply forbids the inconsistent combination of believing the premise but not believing the conclusion – but it doesn’t say what to do if you get into this forbidden combination.

Here’s another example. Assume that A is logically inconsistent with B; compare these three forms:

· (u:A ⊃ ∼u̲:B) If you believe A, then don’t believe B.

· (u:B ⊃ ∼u̲:A) If you believe B, then don’t believe A.

· ∼(u̲:A • u̲:B) Don’t combine believing A with believing B.

Suppose you believe A and also believe B, even though the two are inconsistent. The first form tells you to drop B, while the second tells you to drop A; but which you should drop depends on the situation. The last form is better; it simply tells you to avoid the inconsistent combination.

Proofs with multiple kinds of operator can be confusing. This chart tells what order to use in dropping operators:

First drop these weak operators:

◇ ∼u̲: R (∃x)

Use new worlds/constants; star the old line.

Then drop these strong operators:

u̲: O (x)

Use old worlds/constants if you have them; don’t star the old line.

Within each group, the dropping order doesn’t matter – except that it’s wise to drop “u̲:” and “O” before dropping the very strong “☐.”

Section 9.2 noted that our substitute-equals rule can fail in arguments about beliefs. Consider this argument:

· Jones believes that Lincoln is on the penny.

Lincoln is the first Republican president.

∴ Jones believes that the first Republican president is on the penny.

· j:Pl

l=r

∴ j:Pr

If Jones is unaware that Lincoln was the first Republican president, the premises could be true while the conclusion is false. So the argument is invalid. Yet we can derive the conclusion from the premises using our substitute-equals rule. So we need to qualify this rule so it doesn’t apply in belief contexts. From now on, the substitute-equals rule holds only if no interchanged instance of the constants occurs within a wff that begins with a small letter (underlined or not) followed by a colon (“:”).
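The restriction can be pictured as a simple syntactic guard (a toy sketch of my own, which over-approximates by treating everything after a belief colon as inside the belief context):

```python
def may_substitute(wff, const):
    # scan for each belief operator: a small letter followed by ":";
    # if the constant appears after one, the context is opaque
    for i, ch in enumerate(wff):
        if ch == ":" and i > 0 and wff[i - 1].islower():
            if const in wff[i + 1:]:
                return False
    return True

print(may_substitute("Pl", "l"))    # True: extensional context, swap allowed
print(may_substitute("j:Pl", "l"))  # False: "l" sits inside a belief context
```

With this guard, “Pl” can become “Pr” given l=r, but “j:Pl” cannot become “j:Pr.”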

13.2a Exercise: LogiCola OB

Say whether valid (and give a proof) or invalid (no refutation necessary).

☐(A ⊃ B)

∴ (u:A ⊃ u̲:B)

· 1 ☐(A ⊃ B) Invalid

· [∴ (u:A ⊃ u̲:B)

· *2 asm: ∼(u:A ⊃ u̲:B)

· 3 ∴ u:A {from 2}

· *4 ∴ ∼u̲:B {from 2}

· 5 u ∴ ∼B {from 4}

· *6 u ∴ (A ⊃ B) {from 1}

· 7 u ∴ ∼A {from 5 and 6}

Since rules B+ and B− work only on imperative belief formulas, we can’t go from “u:A” in line 3 to “u ∴ A.” The conclusion here has the faulty if-then form. Suppose that A entails B and you believe A; it doesn’t follow that you should believe B – maybe you should reject A and also reject B.

1. ∼◇(A • B)

∴ ∼(u:A • u:B)

2. ∼◇(A • B)

∴ (u:A ⊃ ∼u:B)

3. ∼◇(A • B)

∴ (u:A ⊃ ∼u:B)

4. ∼◇(A • B)

∴ (∼u:A ∨ ∼u:B)

5. ∼◇(A • B)

∴ (u:∼A ∨ u:∼B)

6. ☐(A ⊃ B)

u:A

∴ u:B

7. ☐(A ⊃ B)

u:A

∴ u:B

8. ☐(A ⊃ B)

u:∼A

∴ ∼u:∼B

9. ☐(A ⊃ B)

u:B

u:∼A

10. ∼◇(A • B)

∴ ∼(u:A • ∼u:∼B)

13.2b Exercise: LogiCola OB

First appraise intuitively. Then translate into logic and say whether valid (and give a proof) or invalid (no refutation necessary).

A logically entails B.

Don’t believe B.

∴ Don’t believe A.

1. You believe A.

∴ You don’t believe not-A.

2. You believe A.

∴ Don’t believe not-A.

3. ∴ If A is self-contradictory, then don’t believe A.

4. ∴ Either believe A or believe not-A.

5. Believe A.

∴ Don’t believe not-A.

6. ∴ Don’t combine believing that A is true with not believing that A is possible.

7. (A and B) entails C.

∴ Don’t combine believing A and believing B and not believing C.

8. A logically entails (B and C).

Don’t believe that B is true.

∴ Believe that A is false.

9. ∴ If A is true, then believe A.

13.3 Believing and willing

Now we’ll expand belief logic to cover willing as well as believing. We’ll do this by treating “willing” as accepting an imperative – just as we previously treated “believing” as accepting an indicative:

· u:A = You believe that A

· You accept (endorse, assent to, say in your heart) “A is true”

· u:A̲ = You will that act A be done

· You accept (endorse, assent to, say in your heart) “Let act A be done”

In translating “u:A,” we’ll often use terms more specific than “will” – like “act,” “resolve to act,” or “desire.”1 Which of these fits depends on whether the imperative is present or future, and whether it applies to oneself or to another. Here are three examples:

1 “Desire” and similar terms can have a prima facie sense (“I have some desire to do A”) or an all-things-considered sense (“All things considered, I desire to do A”). Here I intend the latter.

· If A is present: u:Au̲ = You act (in order) to do A

· You accept the imperative for you to do A now

· If A is future: u:Au̲ = You’re resolved to do A

· You accept the imperative for you to do A in the future

· If u≠x: u:Ax̲ = You desire (or want) that X do A

· You accept the imperative for X to do A

And to accept “Would that I had done that” is to wish that you had done it.

There’s a subtle difference between “u:Au̲” and “Au”:

· u:Au̲ = You act (in order) to do A

· You say in your heart, “Do A now” (addressed to yourself)

· Au = You do A

The first is about what you try or intend to do, while the second is about what you actually do (perhaps accidentally).

Section 12.3 noted that we’d lose important distinctions if we prefixed “O” only to indicatives. Something similar applies here. Consider these three wffs:

· u:(∃x)(Kx • Rx̲) = You desire that some who kill repent

· You say in your heart “Would that some who kill repent”

· u:(∃x)(Kx̲ • Rx) = You desire that some kill who repent

· You say in your heart “Would that some kill who repent”

· u:(∃x)(Kx̲ • Rx̲) = You desire that some both kill and repent

· You say in your heart “Would that some kill and repent”

These differ greatly. Underlining shows which parts are desired: repenting, or killing, or killing-and-repenting. If we attached “desire” only to indicative formulas, all three would translate the same, as “You desire that (∃x)(Kx • Rx)” (“You desire that there’s someone who both kills and repents”). So “desire” is better symbolized in terms of accepting an imperative.
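The point can be made computationally: if each conjunct carries its own indicative/imperative flag, the three desires stay distinct; collapse the flags and they coincide. A small sketch (the representation is mine):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    pred: str         # "K" = kills, "R" = repents
    imperative: bool  # True marks the desired (underlined) part

some_who_kill_repent = (Part("K", False), Part("R", True))
some_kill_who_repent = (Part("K", True), Part("R", False))
some_kill_and_repent = (Part("K", True), Part("R", True))

readings = {some_who_kill_repent, some_kill_who_repent, some_kill_and_repent}
print(len(readings))   # 3: the underlining keeps the readings apart

# dropping the imperative flag collapses all three into one reading
flattened = {tuple(p.pred for p in r) for r in readings}
print(len(flattened))  # 1
```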

This imperative formula tells you to will something:

· u̲:A̲ = Will that act A be done

· Accept (endorse, assent to, say in your heart) “Let act A be done”

Again, our translation can use terms more specific than “will”:

· If A is present: u̲:Au̲ = Act (in order) to do A

· Accept the imperative for you to do A now

· If A is future: u̲:Au̲ = Be resolved to do A

· Accept the imperative for you to do A in the future

· If u≠x: u̲:Ax̲ = Desire (or want) that X do A

· Accept the imperative for X to do A

Be careful about underlining. Underlining before “:” makes the formula an imperative (instead of an indicative). Underlining after “:” makes the formula about willing (instead of believing). Here are the basic cases:

Indicatives

u:A = You believe A.

u:A̲ = You will A.

Imperatives

u̲:A = Believe A.

u̲:A̲ = Will A.

These baseball examples may be helpful:

· Hub = You hit the ball

· Hu̲b = Hit the ball

· OHu̲b = You ought to hit the ball

· RHu̲b = It’s all right for you to hit the ball

· u:Hub = You believe that you’ll hit the ball

· u:Hu̲b = You act (with the intention) to hit the ball

· u̲:Hub = Believe that you’ll hit the ball

· u̲:Hu̲b = Act (with the intention) to hit the ball

13.3a Exercise: LogiCola N (WM & WT)

Translate these English sentences into wffs (use “u” for “you”).

Don’t act to do A without believing that A would be all right.

∼(u̲:Au̲ • ∼u̲:RAu̲)

1. You want Al to sit down. [Use a for “Al” and Sx for “x sits down.”]

2. Believe that Al is sitting down.

3. You believe that Al ought to sit down.

4. Believe that Al intends to sit down.

5. Desire that Al sit down.

6. Eat nothing. [Use Exy for “x eats y.”]

7. Resolve to eat nothing.

8. You fall down, but you don’t act (in order) to fall down. [Fx]

9. You act to kick the goal, but you don’t in fact kick the goal. [Kx]

10. If you believe that you ought to do A, then do A.

11. Don’t combine believing that you ought to do A with not acting to do A.

12. Do A, only if you want everyone to do A. (Act only as you’d want everyone to act.) [This is a crude form of Kant’s formula of universal law.]

13. If X does A to you, then do A to X. (Treat others as they treat you.) [Use Axy. This principle entails “If X knocks out your eye, then knock out X’s eye.”]

14. If you do A to X, then X will do A to you. (People will treat you as you treat them.) [This is often confused with the golden rule.]

15. If you want X to do A to you, then do A to X. (Treat others as you want to be treated.) [This is the “literal golden rule.”]

16. Don’t combine acting in order to do A to X with wanting X not to do A to you.

13.4 Willing proofs

Besides inconsistency in beliefs, there’s also inconsistency in will: I might have inconsistent resolutions, violate ends-means consistency, or have moral beliefs that conflict with how I live. Belief logic can generate norms about consistent willing; thus it deals with practical reason as well as theoretical reason.

Except for having more underlining, proofs with willing formulas work like before. Here’s a proof of “Don’t combine believing that it’s wrong for you to do A with acting to do A” (these parts clash – since if the believing is correct then the acting is wrong, and if the acting is correct then the believing is wrong):

· [ ∴ ∼(u̲:O∼Au̲ • u̲:Au̲)

· * 1 asm: (u̲:O∼Au̲ • u̲:Au̲)

· 2 ∴ u̲:O∼Au̲ {from 1}

· 3 ∴ u̲:Au̲ {from 1}

· 4 u ∴ O∼Au̲ {from 2}

· 5 u ∴ Au̲ {from 3}

· 6 u ∴ ∼Au̲ {from 4}

· 7 ∴ ∼(u̲:O∼Au̲ • u̲:Au̲) {from 1; 5 contradicts 6}

The second part of the formula is expressed as “u̲:Au̲” (which is about what you try or intend to do) and not “Au” (which is about what you do, perhaps accidentally). The faulty translation “∼(u̲:O∼Au̲ • Au)” forbids unintentionally doing what one thinks is wrong; there’s no inconsistency in this, except perhaps externally. The correct version forbids this combination: thinking that A is wrong and at the same time acting with the intention of doing A.

13.4a Exercise: LogiCola OW

Say whether valid (and give a proof) or invalid (no refutation necessary).

∴ (u:O∼Au̲ ⊃ ∼u̲:Au̲)

· [ ∴ (u:O∼Au̲ ⊃ ∼u̲:Au̲) Invalid

· * 1 asm: ∼(u:O∼Au̲ ⊃ ∼u̲:Au̲)

· 2 ∴ u:O∼Au̲ {from 1}

· 3 ∴ u̲:Au̲ {from 1}

· 4 u ∴ Au̲ {from 3}

This says: “If you believe it’s wrong for you to do A, then don’t act to do A”; this leads to problems because it lacks the correct don’t-combine form and because your belief may be mistaken. Maybe you believe that it’s wrong to treat people fairly; then this formula tells you not to act to treat them fairly.

1. ∴ ∼(u:A • u:∼A)

2. ∴ u:(Ba ⊃ RBa)

3. ∴ (u:Ba ∨ u:∼Ba)

4. ∴ ∼((u:(A ⊃ B) • u:A) • ∼u:B)

5. u:(x)OAx

∴ u:Au

6. ∼u:Au

∴ ∼u:OAu

7. ∴ u:(OAu ⊃ Au)

8. ∴ (u:Au ∨ ∼u:OAu)

9. u:Au

∴ ∼u:O∼Au

10. ☐(A ⊃ B)

∴ ∼(u:OA • ∼u:B)

13.4b Exercise: LogiCola OW

First appraise intuitively. Then translate into logic and say whether valid (and give a proof) or invalid (no refutation necessary).

1. ∴ Don’t combine believing that everyone ought to do A with not acting/resolving to do A yourself. [This is belief logic’s version of “Practice what you preach.”]

2. ∴ Don’t combine resolving to eat nothing with acting to eat this. [Use Exy and t.]

3. “Attain this end” entails “If taking this means is needed to attain this end, then take this means.”

∴ Don’t combine (1) wanting to attain this end and (2) believing that taking this means is needed to attain this end and (3) not acting to take this means. [Use E for “You attain this end,” N for “Taking this means is needed to attain this end,” M for “You take this means,” and u. The conclusion is an ends-means consistency imperative; you violate it if you want to become a doctor and believe that studying is needed for you to do this and yet you don’t act to study.]

4. “Attain this end” entails “If taking this means is needed to attain this end, then take this means.”

∴ If you want to attain this end and believe that taking this means is needed to attain this end, then act to take this means. [Use E, N, M, and u. This formulation could tell people with evil ends to do evil things.]

5. ∴ Don’t accept “For all x, it’s wrong for x to kill,” without being resolved that if killing were needed to save your family, then you wouldn’t kill. [Kx, N]

6. ∴ Don’t accept “For all x, it’s wrong for x to kill,” without it being the case that if killing were needed to save your family then you wouldn’t kill. [Use Kx and N. A draft board challenged a pacifist friend of mine, “If killing were needed to save your family, then would you kill?” My friend answered, “I don’t know – I might lose control and kill (it’s hard to predict what you’ll do in a panic situation); but I now firmly hope and resolve that I wouldn’t kill.” Maybe my friend didn’t satisfy this present formula; but he satisfied the previous one.]

7. ∴ Don’t combine accepting “It’s wrong for Bob to do A” with wanting Bob to do A.

8. ∴ Don’t combine believing that the state ought to execute all murderers with not desiring that if your friend is a murderer then the state execute your friend. [Use s for “the state,” Exy for “x executes y,” Mx for “x is a murderer,” f for “your friend,” and u for “you.”]

9. ∴ Don’t combine acting to do A with not accepting that A is all right.

10. ∴ If you act to do A, then accept that act A is all right.

11. ∴ Don’t combine acting to do A with not accepting that A is obligatory.

12. Believe that you ought to do A.

∴ Act to do A.

13. “It’s all right for you to do A” entails “It’s obligatory that everyone do A.”

∴ Don’t combine acting to do A with not willing that everyone do A. [The conclusion is a crude version of Kant’s formula of universal law. To see that the premise and conclusion are questionable, substitute “become a doctor” for “do A” in both. We’ll see a better version of the formula in the next chapter.]

13.5 Rationality translations

Beliefs can be “evident” or “reasonable” for a given person. As I shade my eyes from the bright sun, my belief that it’s sunny is evident; it’s very solidly grounded. As I hear a prediction of rain, my belief that it will rain is reasonable; my belief accords with reason but isn’t well-grounded enough to be evident. “Evident” expresses a higher certitude than does “reasonable.” We’ll symbolize these notions as follows:

· A is evident to you

· = Ou̲:A

· It’s obligatory (rationally required) that you believe A

· Insofar as intellectual considerations are concerned (including your experiences), you ought to believe A

· A is reasonable for you to believe

· = Ru̲:A

· It’s all right (rationally permissible) that you believe A

· Insofar as intellectual considerations are concerned (including your experiences), it would be all right for you to believe A

Neither entails that you believe A; to say that a proposition A that you believe is evident / reasonable, we’ll use “(u:A • Ou̲:A)” / “(u:A • Ru̲:A).” “Evident” and “reasonable” are relative to an individual person; “It’s raining” might be evident to someone outside but not to someone inside in a windowless room.

Here are further translations:

· It would be unreasonable for you to believe A

· = ∼Ru̲:A

· = It’s obligatory that you not believe A

· = O∼u̲:A

· It would be reasonable for you to take no position on A

· = R(∼u̲:A • ∼u̲:∼A)

· It’s evident to you that if A then B

· = Ou̲:(A ⊃ B)

· If it’s evident to you that A, then it’s evident to you that B

· = (Ou̲:A ⊃ Ou̲:B)

· You ought not to combine believing A with believing not-A

· = O∼(u̲:A • u̲:∼A)

Since “O” and “R” attach only to imperatives, “Ou:A” and “Ru:A” aren’t wffs.

We can almost define “knowledge” simply as “evident true belief”:

· You know that A

· = uKA

· = (Ou̲:A • (A • u:A))

· A is evident to you, A is true, and you believe A

Knowing requires more than just true belief; if you guess right, you have true belief without knowledge. Knowledge must be well-grounded; more than just being reasonable (permitted by the evidence), it must be evident (required by the evidence). The claim that knowledge is evident true belief is plausible. But there are cases (like example 10 of §13.6b) where we have one but not the other. So this definition of “knowledge” is flawed; but it’s still a useful approximation.

13.5a Exercise: LogiCola N (RM & RT)

Translate these English sentences into wffs. When an example says a belief is evident or reasonable, but doesn’t say to whom, assume it means evident or reasonable to you.

You ought to want Al to sit down.

Ou̲:Sa̲

We can paraphrase the sentence as “It’s obligatory that you say in your heart ‘Would that Al sit down.’”

1. You ought to believe that Al is sitting down.

2. It’s evident to you that Al is sitting down.

3. It’s reasonable for you to believe that Al ought to sit down.

4. Belief in God is reasonable (for you). [G]

5. Belief in God is unreasonable for everyone.

6. It’s not reasonable for you to believe that belief in God is unreasonable for everyone.

7. Belief in God is reasonable only if “There is a God” is logically consistent.

8. You ought not to combine believing that there is a God with not believing that “There is a God” is logically consistent.

9. You ought not to combine believing that you ought to do A with not acting to do A.

10. You know that x = x. [Use the flawed definition of knowledge given previously.]

11. If agnosticism is reasonable, then theism isn’t evident. [Agnosticism = not believing G and not believing not-G; theism = believing G.]

12. You have a true belief that A. [You believe that A, and it’s true that A.]

13. You mistakenly believe A.

14. It would be impossible for you mistakenly to believe A.

15. A is evident to you, if and only if it would be impossible for you mistakenly to believe A. [This idea is attractive but quickly leads to skepticism.]

16. It’s logically possible that you have a belief A that’s evident to you and yet false.

17. It’s evident to all that if they doubt then they exist. [Dx, Ex]

18. If A entails B, and B is unreasonable, then A is unreasonable.

19. It’s permissible for you to do A, only if you want everyone to do A.

20. If you want X to do A to you, then you ought to do A to X. [Use Axy. This one and the next are versions of the golden rule.]

21. You ought not to combine acting to do A to X with wanting X not to do A to you.

22. It’s necessary that, if you’re in pain, then it’s evident to you that you’re in pain. [Use Px. This claims that “I’m in pain” is a self-justifying belief. Many think that there are two kinds of self-justifying belief: those of experience (as in this example) and those of reason (as in the next example).]

23. It’s necessary that, if you believe that x = x, then it’s evident to you that x = x. [Perhaps believing “x = x” entails understanding it, and this makes it evident.]

24. If you have no reason to doubt your perceptions and it’s evident to you that you believe that you see a red object, then it’s evident to you that there is an actual red object. [Use Dx for “x has reason to doubt his or her perceptions,” Sx for “x sees a red object,” and R for “There is an actual red object.” Roderick Chisholm claimed that we need evidential principles like this (but more complex) to show how beliefs about external objects are based on beliefs about perceptions.]

25. If you have no reason to doubt Jenny’s sincerity and it’s evident to you that she shows pain behavior, then it’s evident to you that Jenny feels pain. [Use Bx, Dx, Fx, and j. This exemplifies an evidential principle about knowing other minds.]

13.6 Rationality proofs

Deontic belief proofs, while not requiring further inference rules, often use complex world prefixes like “Du” or “Duu.” Here’s a proof of a conscientiousness principle, “You ought not to combine believing that it’s wrong for you to do A with acting to do A”:

[Figure: proof of O∼(u:O∼Au • u:Au)]

We get to line 5 using propositional and deontic rules. Lines 6 and 7 follow using rule B+. Here we write belief world prefix “u” after the deontic world prefix “D” used in lines 3 to 5; world Du is a belief world of u that depends on what deontic world D tells u to accept. We soon get a contradiction.
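Since the proof figure does not reproduce in this copy, here is a conjectural reconstruction in the book’s refutation style. The line numbering and justifications are my assumptions, chosen to match the commentary above, and imperative underlining is omitted (as it is elsewhere in this copy):

```latex
% Conjectural reconstruction of the missing proof figure (not the book's
% actual layout). Goal: O~(u:O~Au . u:Au) -- "You ought not to combine
% believing that it's wrong for you to do A with acting to do A."
\begin{array}{lll}
1. & \quad \textnormal{asm: } \sim O{\sim}(u{:}O{\sim}Au \bullet u{:}Au) & \\
2. & \quad \therefore\ R(u{:}O{\sim}Au \bullet u{:}Au) & \{\textnormal{from 1}\}\\
3. & D \therefore\ (u{:}O{\sim}Au \bullet u{:}Au) & \{\textnormal{from 2}\}\\
4. & D \therefore\ u{:}O{\sim}Au & \{\textnormal{from 3}\}\\
5. & D \therefore\ u{:}Au & \{\textnormal{from 3}\}\\
6. & Du \therefore\ O{\sim}Au & \{\textnormal{from 4, B+}\}\\
7. & Du \therefore\ Au & \{\textnormal{from 5, B+}\}\\
8. & Du \therefore\ {\sim}Au & \{\textnormal{from 6}\}\\
9. & \therefore\ O{\sim}(u{:}O{\sim}Au \bullet u{:}Au) & \{\textnormal{from 1; 7 contradicts 8}\}
\end{array}
```

On this reconstruction, lines 7 and 8 put “Au” and “∼Au” in the same belief world Du, which gives the contradiction the text mentions.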

“O∼(u:O∼Au • u:Au)” is a formal ethical principle – an ethical principle that can be formulated using the abstract notions of our logical systems plus variables (like “u” and “A”) that stand for any person and action. The next chapter will focus on another formal ethical principle – the golden rule.

13.6a Exercise: LogiCola O (R & M)

Say whether valid (and give a proof) or invalid (no refutation necessary).

Ru:O(A • B)

∴ Ru:OA

[Figure: proof of this argument]

(If you can follow this example, you needn’t fear proofs involving complex world prefixes.)

1. ☐(A ⊃ B)

∼Ru:B

∴ ∼Ru:A

2. O∼u:A

∴ Ou:∼A

3. R(∼u:A • ∼u:∼A)

∴ ∼Ou:A

4. Ru:∼A

∴ R∼u:A

5. Oa:(C • D)

∴ Ob:C

6. ∴ O∼(u:A • ∼u:◇A)

7. ∴ (Ru:A ⊃ ◇A)

8. ☐(A ⊃ B)

∴ (R∼u:B ⊃ Ru:∼A)

9. Ru:OAu

∴ Ru:◇Au

10. Ou:(A ⊃ OBu)

∴ ∼(u:A • ∼u:Bu)

13.6b Exercise: LogiCola O (R & M)

First appraise intuitively. Then translate into logic and say whether valid (and give a proof) or invalid (no refutation necessary). Use G for “There is a God” and u for “you.” When an example says a belief is evident or reasonable but doesn’t say to whom, assume it means evident or reasonable to you.

1. Theism is evident.

∴ Atheism is unreasonable. [Theism = believing G; atheism = believing not-G.]

2. Theism isn’t evident.

∴ Atheism is reasonable.

3. ∴ You ought not to combine believing you ought to do A with not acting to do A.

4. ∴ If you believe you ought to do A, then you ought to do A.

5. “All men are endowed by their creator with certain unalienable rights” is evident.

“All men are endowed by their creator with certain unalienable rights” entails “There is a creator.”

∴ “There is a creator” is evident. [Use E and C. The opening lines of the US Declaration of Independence claim E to be self-evident.]

6. It would be reasonable for you to believe that A is true.

It would be reasonable for you to believe that B is true.

∴ It would be reasonable for you to believe that A and B are both true.

7. “If I’m hallucinating, then physical objects aren’t as they appear to me” is evident to me.

It’s not evident to me that I’m not hallucinating.

∴ It’s not evident to me that physical objects are as they appear to me. [Use H, P, and i. This argument for skepticism is essentially from Descartes.]

8. “If I’m hallucinating, then physical objects aren’t as they appear to me” is evident to me.

If I have no special reason to doubt my perceptions, then it’s evident to me that physical objects are as they appear to me.

I have no special reason to doubt my perceptions.

∴ It’s evident to me that I’m not hallucinating. [Use H, P, D, and i. This is John Pollock’s answer to the previous argument.]

9. It’s evident to you that taking this means is needed to attain this end.

“Attain this end” entails “If taking this means is needed to attain this end, then take this means.”

∴ You ought not to combine wanting to attain this end with not acting to take this means. [Use N for “Taking this means is needed to attain this end,” E for “You attain this end,” M for “You take this means,” and u.]

10. Al believes that Smith owns a Ford.

It’s evident to Al that Smith owns a Ford.

Smith doesn’t own a Ford.

Smith owns a Chevy.

Al believes that Smith owns a Ford or a Chevy.

Al doesn’t know that Smith owns a Ford or a Chevy.

∴ Al has an evident true belief that Smith owns a Ford or a Chevy; but Al doesn’t know that Smith owns a Ford or a Chevy. [Use a for “Al,” F for “Smith owns a Ford,” C for “Smith owns a Chevy,” and K for “Al knows that Smith owns a Ford or a Chevy.” This argument from Edmund Gettier attacks the definition of knowledge as evident true belief.]

11. It’s evident to you that if it’s all right for you to hit Al then it’s all right for Al to hit you.

∴ Don’t combine acting to hit Al with believing that it would be wrong for Al to hit you. [Use Hxy, u, and a. The premise is normally true; but it could be false if you and Al are in different situations (maybe Al needs to be hit to dislodge food he’s choking on). The conclusion resembles the golden rule.]

12. ∴ It’s reasonable to want A to be done, only if it’s reasonable to believe that A would be all right.

13. It’s evident that A is true.

∴ A is true.

14. It’s reasonable to combine believing that there is a perfect God with believing T.

T entails that there’s evil in the world.

∴ It’s reasonable to combine believing that there is a perfect God with believing that there’s evil in the world. [Use G, T, and E. Here T (for “theodicy”) is a reasonable explanation of why God permits evil, perhaps “The world has evil because God, who is perfect, wants us to make significant free choices to struggle to bring a half-completed world toward its fulfillment; moral evil comes from the abuse of human freedom and physical evil from the half-completed state of the world.”]

15. It’s evident to you that if there are moral obligations then there’s free will.

∴ Don’t combine accepting that there are moral obligations with not accepting that there’s free will. [M, F]

16. Theism is reasonable.

∴ Atheism is unreasonable.

17. Theism is evident.

∴ Agnosticism is unreasonable. [Agnosticism = not believing G and not believing not-G.]

18. ∴ It’s reasonable for you to believe that God exists, only if “God exists” is consistent. [Belief logic regards a belief as “reasonable” only if in fact it’s consistent. In a more subjective sense, someone could “reasonably” believe a proposition that’s reasonably but incorrectly taken to be consistent.]

19. ∴ If A is unreasonable, then don’t believe A.

20. You ought not to combine accepting A with not accepting B.

∴ If you accept A, then accept B.

21. ∴ You ought not to combine wanting A not to be done with believing that A would be all right.

22. It’s reasonable not to believe that there is an external world.

∴ It’s reasonable to believe that there’s no external world. [E]

23. It’s reasonable to believe that A ought to be done.

∴ It’s reasonable to want A to be done.

24. ∴ Either theism is reasonable or atheism is reasonable.

25. It’s evident to you that if the phone is ringing then you ought to answer it.

It’s evident to you that the phone is ringing.

∴ Act on the imperative “Answer the phone.” [P, Ax]

26. A entails B.

Believing A would be reasonable.

∴ Believing B would be reasonable.

27. Atheism isn’t evident.

∴ Theism is reasonable.

28. Atheism is unreasonable.

Agnosticism is unreasonable.

∴ Theism is evident.

29. A entails B.

You accept A.

It’s unreasonable for you to accept B.

∴ Don’t accept A, and don’t accept B.

30. It would be reasonable for anyone to believe A.

∴ It would be reasonable for everyone to believe A. [Imagine a controversial issue where everyone has the same evidence. Could it be more reasonable for the community to disagree? If so, the premise of this argument might be true but the conclusion false.]

13.7 A sophisticated system

The system of belief logic that we’ve developed is oversimplified in three ways. We’ll now sketch a more sophisticated system.

First, “One ought to be consistent” requires qualification. For the most part, we do have a duty to be consistent. But, since “ought” implies “can,” this duty is nullified when we’re unable to be consistent; such inability can come from emotional turmoil¹ or our incapacity to grasp complex logical relations. And the obligation to be consistent can be overridden by other factors; if Dr Evil would destroy the world unless we were inconsistent in some respect, then surely our duty to be consistent would be overridden. And the duty to be consistent applies, when it does, only to persons; yet our principles so far would entail that rocks and trees also have a duty to be consistent.

¹ Perhaps you see (and believe) that your wife was in a car that blew up and you believe that anyone in such a car would be dead – but you’re psychologically unable at the moment to believe that your wife is dead. Then you’re psychologically unable at the moment to be consistent about this.

For these reasons, it would be better to qualify our “You ought to be consistent” principle, as in the following rough formulation:²

² Section 2.3 of my Formal Ethics (London: Routledge, 1996) has additional qualifications.

If you are a person able to be consistent in certain ways, grasp (or should grasp) the logical relationships, and your being consistent wouldn’t have disastrous consequences, then you ought to be consistent in these ways.

Let’s abbreviate the qualification in the box (“You are …”) as “Qu.” Then we could reformulate our inference rules by adding a “Qu” premise:

B+

u:A, Qu → u ∴ A,

use any string of u’s

B–

∼u:A, Qu → u ∴ ∼A,

use a new string of u’s

With these changes, we’d need plentiful “Qu” provisos in the previous sections.

A second problem is that our system can prove a conjunctivity principle:

O∼((u:A • u:B) • ∼u:(A • B))

You ought not to combine believing A and believing B and not believing A-and-B

This leads to questionable results in the lottery paradox. Suppose six people have an equal chance to win a lottery. You know that one of the six will win; but the probability is against any given person winning. Presumably it could be reasonable for you to accept statements 1 to 6 without also accepting statement 7 (which means “None of the six will win”):

1. Person 1 won’t win.

2. Person 2 won’t win.

3. Person 3 won’t win.

4. Person 4 won’t win.

5. Person 5 won’t win.

6. Person 6 won’t win.

7. Person 1 won’t win, person 2 won’t win, person 3 won’t win, person 4 won’t win, person 5 won’t win, and person 6 won’t win.

But multiple uses of our conjunctivity principle would entail that one ought not to accept statements 1 to 6 without also accepting their conjunction 7. So the conjunctivity principle, which is provable using our rules B+ and B–, sometimes leads to questionable results.

I’m not completely convinced that it’s reasonable to accept statements 1 to 6 but not accept 7. If it is reasonable, then we have to reject the conjunctivity principle and modify our consistency ideal. Let’s call the ideal of “completely consistent” defined in §13.2 broad consistency. Perhaps we should strive, not for this, but for narrow consistency. Let S be the set of indicatives and imperatives that X accepts; then X is narrowly consistent if and only if:

1. every pair of items of set S is logically consistent, and

2. X accepts whatever follows from any single item of set S.

Believing the six lottery statements but not their conjunction is narrowly consistent but not broadly consistent.
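The lottery situation can be checked mechanically. The following Python sketch (the helper names `consistent` and `someone_wins` are mine, not the book’s notation) brute-forces truth assignments to show that the six beliefs are consistent pair by pair with the background fact that someone wins, yet inconsistent when taken all together. It illustrates clause 1 of narrow consistency; clause 2 (accepting what follows from any single accepted item) isn’t modeled here.

```python
from itertools import combinations, product

def consistent(formulas, atoms):
    """Return True if some truth assignment to the atoms
    satisfies every formula (each formula maps an assignment
    dict to a bool) -- i.e., the set is logically consistent."""
    for values in product([True, False], repeat=len(atoms)):
        assign = dict(zip(atoms, values))
        if all(f(assign) for f in formulas):
            return True
    return False

# Wi = "person i will win the lottery"
atoms = ["W1", "W2", "W3", "W4", "W5", "W6"]

# Background knowledge: one of the six will win.
someone_wins = lambda a: any(a[w] for w in atoms)

# The six lottery beliefs: "person i won't win."
beliefs = [lambda a, w=w: not a[w] for w in atoms]

# Every pair of beliefs is consistent with the background fact ...
pairwise_ok = all(consistent([someone_wins, f, g], atoms)
                  for f, g in combinations(beliefs, 2))

# ... but the whole set of beliefs is not (their conjunction,
# statement 7, says nobody wins).
jointly_ok = consistent([someone_wins] + beliefs, atoms)

print(pairwise_ok, jointly_ok)  # prints: True False
```

So accepting statements 1 to 6 while withholding 7 passes the pairwise test (narrow consistency) but fails the whole-set test (broad consistency), which is exactly the gap the conjunctivity principle exploits.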

To have our rules mirror the ideal of narrow consistency, we’d add to rules B+ and B– that any belief world prefix used in these rules cannot have occurred more than once in earlier lines. With this change, only a few arguments in this chapter would cease being provable. And many of these could be salvaged by adding an additional conjunctivity premise like the following (which would be true in many cases): “You ought not to combine believing A and believing B and not believing A-and-B.” Conjunctivity presumably fails only in rare lottery-type cases.

The third problem is that we’ve been translating these two statements the same way, as “Ou:A,” even though they don’t mean the same thing:

“You ought to believe A” ≠ “A is evident to you”

Suppose you ought to trust your wife and give her the benefit of every reasonable doubt; you ought to believe what she says, even though the evidence isn’t so strong as to make this belief evident. Here there’s a difference between “ought to believe” and “evident.” And so it may be better to use a different symbol (perhaps “O*”) for “evident”:

· You ought to believe A = Ou:A = All things considered, you ought to believe A

· A is evident to you = O*u:A = Insofar as intellectual considerations are concerned (including your experiences), you ought to believe A

“O” is an all-things-considered “ought,” while “O*” is a prima facie “ought” that considers only the intellectual basis for the belief. If we added “O*” to our system, we’d need corresponding deontic inference rules for it. Since “O*A” is a prima facie “ought,” it wouldn’t entail the corresponding imperative or commit one to action; so we’d have to weaken the rule for dropping “O*” so we couldn’t derive “u:A” from “O*u:A.”

These three refinements would overcome some problems but make our system much harder to use. We seldom need the refinements. So we’ll keep the naïve belief logic of earlier sections as our “official system” and build on it in the next chapter. But we’ll be conscious that this system is oversimplified in various ways. If and when the naïve system gives questionable results, we can appeal to the sophisticated system to clear things up.
