5

Inductive Reasoning

Much of our reasoning deals with probabilities. We observe patterns and conclude that, based on these, such and such a belief is probably true. This is inductive reasoning.

5.1 The statistical syllogism

The Appalachian Trail (AT), a 2,160-mile footpath from Georgia to Maine in the eastern US, has a series of lean-to shelters. Suppose we backpack on the AT and plan to spend the night at Rocky Gap Shelter. We’d like to know beforehand whether there’s water (a spring or stream) close by. If we knew that all AT shelters have water, or that none do, we could reason deductively:

· All AT shelters have water.

· Rocky Gap is an AT shelter.

· ∴ Rocky Gap has water.

· No AT shelters have water.

· Rocky Gap is an AT shelter.

· ∴ Rocky Gap doesn’t have water.

Both are deductively valid. Both have a tight connection between premises and conclusion; if the premises are true, the conclusion has to be true. Deductive validity is “all or nothing.” Deductive arguments can’t be “half-valid,” nor can one be “more valid” than another.

In fact, most of the shelters have water, but a few don’t. Of the shelters that I’ve visited, roughly 90 percent (depending on season and rainfall) have had water. If we knew that 90 percent had water, we could reason inductively:

· 90 percent of AT shelters have water.

· Rocky Gap is an AT shelter.

· That’s all we know about the matter.

· ∴ Probably Rocky Gap has water.

This is a strong inductive argument. Relative to the premises, the conclusion is a good bet. But it could turn out false, even though the premises are all true.

The “That’s all we know about the matter” premise means “We have no further information that influences the conclusion’s probability.” Suppose we just met a thirsty backpacker complaining that the water at Rocky Gap had dried up; that would change the conclusion’s probability. The premise claims that we have no such further information.

Inductive arguments differ from deductive ones in two ways. (1) Inductive arguments vary in how strongly the premises support the conclusion; “99 percent of AT shelters have water” supports the conclusion more strongly than does “60 percent of AT shelters have water.” We have shades of gray here – not the black and white of deductive validity/invalidity. (2) Even a strong inductive argument has only a loose connection between premises and conclusion. The premises make the conclusion at most only highly probable; the premises might be true while the conclusion is false. Inductive reasoning is a form of guessing based on recognizing and extending known patterns and resemblances.

So a deductive argument claims that it’s logically necessary that if the premises are all true, then so is the conclusion. An inductive argument claims that it’s likely (but not logically necessary) that if the premises are all true, then so is the conclusion. This chapter focuses on inductive arguments.

If we refine our conclusion to specify a numerical probability, we get the classic statistical syllogism form:

Statistical Syllogism

· N percent of A’s are B’s.

· X is an A.

· That’s all we know about the matter.

· ∴ It’s N percent probable that X is a B.

· 90 percent of AT shelters have water.

· Rocky Gap is an AT shelter.

· That’s all we know about the matter.

· ∴ It’s 90 percent probable that Rocky Gap has water.

Here’s another example:

· 50 percent of coin tosses are heads.

· This is a coin toss.

· That’s all we know about the matter.

· ∴ It’s 50 percent probable that this is heads.

Suppose that all we know affecting the probability of the toss being heads is that 50 percent of coin tosses are heads and that this is a coin toss. Then it’s 50 percent probable to us that the toss is heads. This holds if we hadn’t yet tossed the coin, or if we tossed it but didn’t yet know how it landed. The matter is different if we know how it landed. Then it’s no longer just 50 percent probable to us that it’s heads; rather, we know that it’s heads or that it’s tails.

Statistical syllogisms apply most cleanly if we know little about the subject. Suppose we know these two facts about Michigan’s football team:

· Michigan has first down and runs 70 percent of the time on first down.

· Michigan is behind and passes 70 percent of the time when it’s behind.

Relative to the first fact, Michigan probably will run. Relative to the second fact, Michigan probably will pass. But it’s unclear what Michigan probably will do relative to both facts. It gets worse if we add facts about the score, the time left, and the offensive formation. Each fact by itself may lead to a clear conclusion about what Michigan probably will do; but the combination muddies the issue. Too much information can confuse us when we apply statistical syllogisms.

Chapter 1 distinguished valid from sound deductive arguments. Valid asserts a correct relation between premises and conclusion, but says nothing about the truth of the premises; sound includes both “valid” and “has true premises.” It’s convenient to have similar terms for inductive arguments. Let’s say that an argument is strong inductively if the conclusion is probable relative to the premises. And let’s say that an argument is reliable inductively if it’s strong and has true premises. So then:

· With DEDUCTIVE ARGUMENTS: a correct premise–conclusion link makes the argument VALID; and VALID plus true premises makes the argument SOUND.

· With INDUCTIVE ARGUMENTS: a correct premise–conclusion link makes the argument STRONG; and STRONG plus true premises makes the argument RELIABLE.

Here’s a very strong inductive argument that isn’t reliable:

· Michigan loses 99 percent of the times it plays.

· Michigan is playing today.

· That’s all we know about the matter.

· ∴ Probably Michigan will lose today.

This is very strong, because relative to the premises the conclusion is very probable. But the argument isn’t reliable, since premise 1 is false.

5.2 Probability calculations

Sometimes we can calculate probabilities precisely. Coins tend to land heads half the time and tails the other half; so each coin has a 50 percent chance of landing heads and a 50 percent chance of landing tails. Suppose we toss two coins. There are four possible combinations of heads (H) and tails (T) for the two coins:

HH HT TH TT

Each case is equally probable. So our chance of getting two heads is 25 percent (.25 or ¼), since it happens in 1 out of 4 cases. Here’s the rule (where “prob” is short for “the probability” and “favorable cases” are those in which A is true):

This rule holds if every case is equally likely:

Prob of A = the number of favorable cases / the total number of cases

Our chance of getting at least one head is 75 percent (.75 or ¾), since it happens in 3 of 4 cases.
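For readers comfortable with a little programming, the equally-likely-cases rule is easy to verify by listing every case. Here’s a minimal Python sketch (the helper `prob` and the variable names are mine, not part of the text):

```python
from itertools import product

# The four equally likely outcomes of tossing two coins: HH HT TH TT.
outcomes = list(product("HT", repeat=2))

def prob(event):
    """Prob of A = number of favorable cases / total number of cases."""
    favorable = [o for o in outcomes if event(o)]
    return len(favorable) / len(outcomes)

two_heads = prob(lambda o: o == ("H", "H"))    # 1 of 4 cases: 0.25
at_least_one = prob(lambda o: "H" in o)        # 3 of 4 cases: 0.75
```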

With odds, the ratio concerns favorable and unfavorable cases (“unfavorable cases” are those in which A is false). The odds are in your favor if the number of favorable cases is greater (then your probability is greater than 50 percent):

The odds in favor of A = the number of favorable cases / the number of unfavorable cases

So the odds are 3 to 1 in favor of getting at least one head – since it happens in 3 cases and fails in only 1 case. The odds are against you if the number of unfavorable cases is greater (so your probability is less than 50 percent):

The odds against A = the number of unfavorable cases / the number of favorable cases

Odds are usually given in whole numbers, with the larger number first. We wouldn’t say “The odds are 1 to 3 in favor of getting two heads”; rather, we’d put the larger number first and say “The odds are 3 to 1 against getting two heads.” Here are examples of how to convert between odds and probability:

· The odds are even (1 to 1) that we’ll win = The probability of our winning is 50 percent.

· The odds are 7 to 5 in favor of our winning = The probability of our winning is 7/12 (7 favorable cases out of 12 total cases, or 58.3 percent).

· The odds are 7 to 5 against our winning = The probability of our winning is 5/12 (5 favorable cases out of 12 total cases, 41.7 percent).

· The probability of our winning is 70 percent = The odds are 7 to 3 in favor of our winning (70 percent favorable to 30 percent unfavorable).

· The probability of our winning is 30 percent = The odds are 7 to 3 against our winning (70 percent unfavorable to 30 percent favorable).
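The conversions above follow a simple recipe: odds to probability is favorable over (favorable plus unfavorable), and probability to odds reduces the favorable-to-unfavorable ratio to lowest whole numbers. A Python sketch (the function names are my own):

```python
from fractions import Fraction
from math import gcd

def odds_to_prob(favorable, unfavorable):
    """Odds of favorable-to-unfavorable in favor -> probability."""
    return Fraction(favorable, favorable + unfavorable)

def prob_to_odds(p):
    """Probability -> whole-number (favorable, unfavorable) odds."""
    frac = Fraction(p)
    favorable = frac.numerator
    unfavorable = frac.denominator - frac.numerator
    g = gcd(favorable, unfavorable)
    return favorable // g, unfavorable // g

# 7 to 5 in favor = 7/12 probable (about 58.3 percent):
assert odds_to_prob(7, 5) == Fraction(7, 12)
# 70 percent probable = 7 to 3 in favor:
assert prob_to_odds(Fraction(70, 100)) == (7, 3)
```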

We’ll now learn some rules for calculating probabilities. The first two rules are about necessary truths and self-contradictions:

If A is a necessary truth: Prob of A = 100 percent.

If A is a self-contradiction: Prob of A = 0 percent.

Our chance of a specific coin being either heads or not heads is 100 percent. And our chance of it being both heads and not heads (at one time) is 0 percent.

This next rule relates the probability of a given event happening to the probability of that event not happening:

Prob of not-A = 100 percent − prob of A.

So if our chance of getting two heads is 25 percent, then our chance of not getting two heads is 75 percent (100 percent − 25 percent).

The next rule concerns events that are independent of each other, in that the occurrence of one doesn’t make the occurrence of the other any more or any less likely (the first coin being heads, for example, doesn’t make it any more or any less likely that the second coin will be heads):

If A and B are independent:

Prob of (A and B) = prob of A • prob of B.

Probabilities multiply with AND. So our chance of throwing two heads (25 percent) and then throwing two heads again (25 percent) is 6.25 percent (25 percent • 25 percent).
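The multiplication rule can be confirmed by brute force: four independent fair-coin tosses give two-heads-then-two-heads in exactly 1 of 16 equally likely cases. A small Python check (illustrative only):

```python
from itertools import product

# Independent events: probabilities multiply with AND.
p_two_heads = 0.25                                  # one toss of two coins
p_two_heads_twice = p_two_heads * p_two_heads       # 0.0625 = 6.25 percent

# Brute-force check: all 16 outcomes of four coin tosses, one favorable.
outcomes = list(product("HT", repeat=4))
favorable = sum(o == ("H", "H", "H", "H") for o in outcomes)
assert favorable / len(outcomes) == p_two_heads_twice
```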

This next rule holds for events that are mutually exclusive, in that they can’t both happen together:

If A and B are mutually exclusive:

Prob of (A or B) = prob of A + prob of B.

Probabilities add with OR. It can’t happen that we throw two heads and also (on the same toss of two coins) throw two tails. The probability of either event is 25 percent. So the probability of one or the other happening (getting two heads or two tails) is 50 percent (25 percent + 25 percent). When the two events aren’t mutually exclusive, we use this more complex rule:

This holds even if A and B aren’t mutually exclusive:

Prob of (A or B) = Prob of A + prob of B − prob of (A and B).

Suppose we calculate the probability of getting at least one head when we flip two coins. Coin 1 being heads and coin 2 being heads aren’t mutually exclusive, since they might both happen together; so we apply the more complex rule. The chance of coin 1 being heads or coin 2 being heads = the chance of coin 1 being heads (50 percent) + the chance of coin 2 being heads (50 percent) − the chance of coin 1 and coin 2 both being heads (25 percent). So our chance of getting at least one head is 75 percent (50 + 50 − 25). If A and B are mutually exclusive, then the probability of (A and B) = 0 and the simpler rule gives the same result.
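The two-coin calculation can be mirrored directly in code. This sketch counts cases to get the three probabilities and then applies the general OR rule (the variable names are mine):

```python
from itertools import product

tosses = list(product("HT", repeat=2))                      # HH HT TH TT

p_coin1_heads = sum(o[0] == "H" for o in tosses) / 4        # 0.50
p_coin2_heads = sum(o[1] == "H" for o in tosses) / 4        # 0.50
p_both_heads = sum(o == ("H", "H") for o in tosses) / 4     # 0.25

# General OR rule: prob(A or B) = prob A + prob B - prob(A and B).
p_at_least_one = p_coin1_heads + p_coin2_heads - p_both_heads   # 0.75
```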

Suppose we throw two dice. There are six equally probable possibilities for each die. Here are the possible combinations and resulting totals (the numbers on the left are for the first die, the numbers on the top are for the second die, and the other numbers are the totals):

        1    2    3    4    5    6
   1    2    3    4    5    6    7
   2    3    4    5    6    7    8
   3    4    5    6    7    8    9
   4    5    6    7    8    9   10
   5    6    7    8    9   10   11
   6    7    8    9   10   11   12

These 36 combinations each have an equal 1/36 probability. The chance of getting 12 is 1/36, since we get 12 in only 1 of 36 cases. The chance of getting 11 is 1/18 (2/36) – since we get 11 in 2 of 36 cases. Similarly, we have a 1/6 (6/36) chance of getting 10 or higher, and a 5/6 (30/36) chance of getting 9 or lower.
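All of these dice probabilities can be computed by enumerating the 36 combinations. A short Python sketch using exact fractions (the helper `prob_total` is my own name):

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely (die 1, die 2) combinations.
rolls = list(product(range(1, 7), repeat=2))

def prob_total(test):
    """Probability that the two-dice total satisfies `test`."""
    favorable = sum(1 for d1, d2 in rolls if test(d1 + d2))
    return Fraction(favorable, len(rolls))

assert prob_total(lambda t: t == 12) == Fraction(1, 36)
assert prob_total(lambda t: t == 11) == Fraction(1, 18)
assert prob_total(lambda t: t >= 10) == Fraction(1, 6)
assert prob_total(lambda t: t <= 9) == Fraction(5, 6)
```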

Suppose we have a standard deck of 52 cards. What’s our chance of getting 2 aces when dealt 2 cards? We might think that, since 1/13 of the cards are aces, our chance of getting two aces is 1/169 (1/13 • 1/13). But that’s wrong. Our chance of getting an ace on the first draw is 1/13, since there are 4 aces in the 52 cards, and 4/52 = 1/13. But if we get an ace on the first draw, then there are only 3 aces left in the 51 cards. So our chance of getting a second ace is 1/17 (3/51). Thus, our chance of getting 2 aces is 1/221 (1/13 • 1/17), or about 0.45 percent.

Here the events aren’t independent. Getting an ace on the first card reduces the number of aces left and our chance of drawing an ace for the second card. This is unlike coins, where getting heads on one toss doesn’t affect our chance of getting heads on the next toss. If events A and B aren’t independent, we need this rule for determining the probability of the conjunction (A and B):

This holds even if A and B aren’t independent:

Prob of (A and B) = Prob of A • (prob of B after A occurs).

This reflects the reasoning about our chance of getting 2 aces from a 52-card deck. What’s our chance with a double 104-card deck? Our chance of getting a first ace is again 1/13 (since there are 8 aces among the 104 cards, and 8/104 = 1/13). After we get a first ace, there are 7 aces left in the 103 cards, and so our chance of a second ace is 7/103. So the probability of getting a first ace and then a second ace = 1/13 (the probability of the first ace) • 7/103 (the probability of the second ace). This works out to 7/1339 (1/13 • 7/103), or about 0.52 percent. So our chance of getting 2 aces when dealt 2 cards from a double 104-card deck is about 0.52 percent (slightly better than the 0.45 percent with a standard deck).
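The rule for non-independent events can be packaged as a small function; this sketch reproduces both deck calculations with exact fractions (the function is my illustration, not from the text):

```python
from fractions import Fraction

def prob_two_aces(deck_size, aces):
    """Prob of (A and B) = prob of A * (prob of B after A occurs)."""
    first = Fraction(aces, deck_size)
    second = Fraction(aces - 1, deck_size - 1)   # one ace, one card gone
    return first * second

single = prob_two_aces(52, 4)     # 1/13 * 3/51  = 1/221  (about 0.45 percent)
double = prob_two_aces(104, 8)    # 1/13 * 7/103 = 7/1339 (about 0.52 percent)
```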

Mathematically fair betting odds are in reverse proportion to probability. Suppose we bet on whether, in drawing 2 cards from a standard 52-card deck, we’ll draw 2 aces. There’s a 1/221 chance of this, so the odds are 220 to 1 against us. If we bet $1, we should get $220 if we win. If we play for a long time under such betting odds, our gains and losses will likely roughly equalize. In a casino, the house takes its cut and so we get a lower payoff. So if we play there a long time under such odds, probably we’ll lose and the casino will win. That’s why Las Vegas casinos look like the palaces of emperors.

5.2a Exercise: LogiCola P (P, O, & C)

Work out the following problems. A calculator is useful for some of them.

You’re playing blackjack and your first card is an ace. What’s your chance of getting a card worth 10 (a 10, jack, queen, or king) for your next card? You’re using a standard 52-card deck.

There are 16 such cards (a 10, J, Q, and K in each suit) among the 51 remaining cards. So your chance is 16/51 (about 31.4 percent).

1. What would the answer to the sample problem be with a double 104-card deck?

2. Suppose the Cubs and Mets play baseball today. There’s a 60 percent chance of rain, which would cancel the game. If the teams play, the Cubs have a 20 percent chance of winning. What chance do the Cubs have of winning today?

3. You’re tossing coins. You tossed 5 heads in a row using a fair coin. What’s the probability now that the next coin will be heads?

4. You’re about to toss 6 coins. What’s the probability that all 6 will be heads?

5. Suppose there’s an 80 percent chance that the winner of the Michigan versus Ohio State game will go to the Rose Bowl, a 60 percent chance that Michigan will beat Ohio State, and a 30 percent chance that Michigan will win the Rose Bowl if it goes. Then what’s the probability that Michigan will win the Rose Bowl?

6. Suppose you bet $10 that Michigan will win the Rose Bowl. Assuming the probabilities of the last example and mathematically fair betting odds, how much money should you win if Michigan wins the Rose Bowl?

7. You’re playing blackjack and get an ace for the first card. You know that the cards used on the only previous hand were a 5, a 6, two 7’s, and two 9’s, and that all these are in the discard pile. What’s your chance of getting a card worth 10 (a 10, jack, queen, or king) for the next card? You’re using a standard 52-card deck.

8. What would the answer to the last problem be with a double 104-card deck?

9. You’re throwing a pair of dice. Your sister bets you even money that you’ll throw an even number (adding both together). Is she playing you for a sucker?

10. Your sister is throwing a pair of dice. She says, “I bet I’ll throw a number divisible by three.” What are the mathematically fair betting odds?

11. You’re dealt five cards: two 3s, a 4, a 6, and a 7. If you get another card, what’s the probability that it will be a 5? What’s the probability that it will be a 3?

12. You’re at a casino in Las Vegas and walk by a $1 slot machine that says “Win $2,000!” Assume that this is the only way you can win and that it gives mathematically fair odds or worse. What’s your chance of winning if you deposit $1?

13. What’s the probability, ignoring leap-year complications, that both your parents have their birthday on the same day of the year (whatever day that may be)?

14. Our football team, Michigan, is 2 points behind with a few seconds left. We have the ball, fourth and two, on the Ohio State 38. We could have the kicker try a long field goal, which would win the game. The probability of kicking this goal is 30 percent. Or we could try to make a first down and then kick from a shorter distance. There’s a 70 percent probability of making a first down and a 50 percent probability of making the shorter field goal if we make the first down. Which alternative gives us a better chance to make the field goal?

15. Our team, Michigan, is 2 points ahead with a minute left. Ohio State is going for it on fourth down. It’s 60 percent probable that they’ll pass, and 40 percent probable that they’ll run. We can defense the pass or defense the run. If we defense the pass, then we’re 70 percent likely to stop a pass but only 40 percent likely to stop a run. If we defense the run, then we’re 80 percent likely to stop a run but only 50 percent likely to stop a pass. What should we do?

5.3 Philosophical questions

We’ll now consider four philosophical questions on probability. Philosophers disagree about how to answer these questions.

1. Are the ultimate scientific laws governing the universe deterministic or probabilistic in nature?

Some think all ultimate scientific laws are deterministic; we use probability only because we lack knowledge. Suppose we knew all scientific laws and the complete state of the world at a given time. Then we could in principle infallibly predict whether the coin will come up heads, whether it will rain three years from today, and who will win the World Cup in 30 years. This is the thesis of determinism.

Others say that some or all of the ultimate laws governing our world are probabilistic. Such laws say that under given conditions a result will probably obtain, but not that it must obtain. The world is a dice game.

The empirical evidence on this issue is inconclusive. Quantum physics today embraces probabilistic laws but could someday return to deterministic laws. The issue is complicated by the controversy over whether determinism is an empirical or an a priori issue (§3.7); some think reason (not experience) gives us certainty that the world is deterministic.

2. What does “probable” mean? And can every statement be assigned a numerical probability relative to given evidence?

“Probable” has various senses. “The probability of heads is 50 percent” could be taken in at least four ways:

· Ratio of observed frequencies: We’ve observed that coins land heads about half of the time.

· Ratio of abstract possibilities: Heads is one of the two equally likely abstract possibilities.

· Measure of actual confidence: We have the same confidence in the toss being heads as we have in it not being heads.

· Measure of rational confidence: It’s rational to have the same confidence in the toss being heads as in it not being heads.

We used a ratio of observed frequencies to calculate the probability of finding water at Rocky Gap Shelter. And we used a ratio of abstract possibilities to calculate the probability of being dealt two aces. But sometimes these ratio approaches can’t give numerical probabilities. Neither ratio approach gives a numerical probability to “Michigan will run” relative to information about ancient Greek philosophy or relative to this combination:

· Michigan has first down and runs 70 percent of the time on first down.

· Michigan is behind and passes 70 percent of the time when it’s behind.1

1 Here it would be helpful to know what Michigan does on first down when they’re behind. But the same problem continues if other factors are relevant (e.g., how much time is left in the game).

Only in special cases do the ratio approaches give numerical probabilities.

The measure of actual confidence sometimes yields numerical probabilities. Consider these statements:

· “There’s life in other galaxies.”

· “Michigan will beat Ohio State this year.”

· “There’s a God.”

If you regard 1-to-1 betting odds on one of these as fair, then your actual confidence in the statement is 50 percent. But you may be unwilling to commit yourself to such odds. Maybe you can’t say if your confidence in the statement is less or greater than your confidence that a coin toss will be heads. Then we can’t assign numbers to your actual confidence. The rational confidence view, too, would have trouble assigning numerical probabilities in these cases.

Some doubt that probability as rational confidence satisfies the standard probability rules of the last section. These rules say that necessary statements always are 100 percent probable. But consider a complex propositional logic formula that’s a necessary truth, even though your evidence suggests that it isn’t; perhaps your normally reliable logic teacher tells you that it’s not a necessary truth – or perhaps in error you get a truth-table line of false (see §6.5). Relative to your data, it seems rational not to put 100 percent confidence in the formula, even though it in fact is a necessary truth. So is probability theory wrong?

Probability theory is idealized rather than wrong. It describes the confidence an ideal reasoner would have; an ideal reasoner would always recognize necessary truths and put 100 percent confidence in them. So we have to be careful applying probability theory to the beliefs of non-ideal human beings; we must be like physicists who give simple equations for frictionless bodies and then keep in mind that these are idealized when applying the equations to real cases.

Probability as actual confidence definitely can violate the probability rules. Many would calculate the probability of drawing 2 aces from a 52- or 104-card deck as 1/169 (1/13 • 1/13); so they’d regard 168-to-1 betting odds as fair. But the probability rules say this is wrong (§5.2).

3. How does probability relate to how ideally rational persons believe?

Some think ideally rational persons would believe all and only those statements that are more than 50 percent probable. But this has strange implications. Suppose that Austria, Brazil, and China each has a 33⅓ percent chance of winning the World Cup. Then each of these is 66⅔ percent probable:

· “Austria won’t win the World Cup, but Brazil or China will.”

· “Brazil won’t win the World Cup, but Austria or China will.”

· “China won’t win the World Cup, but Austria or Brazil will.”

On the view just described, ideally rational persons would believe all three statements. But this is silly; only a very confused person could do this.

The view has other problems. Why pick 50 percent? Why wouldn’t ideally rational persons believe all and only those statements that are at least 60 percent (or 75 percent or 90 percent) probable? And there are further problems if there’s no way to work out numerical probabilities.

The view gives an ideal of selecting all beliefs in a way that’s free of subjective factors (like feelings and practical interests). Some find this ideal attractive. Pragmatists find it repulsive. They believe in following subjective factors on issues that our intellects can’t decide. They think that numerical probability doesn’t apply to life’s deeper issues (like free will, God, or basic moral principles).

4. How does probability relate to how ideally rational persons act?

Some think ideally rational persons always act to maximize expected gain. In working out what to do, they’d list the possible alternative actions (A, B, C, …) and then consider the possible outcomes (A1, A2, A3, …) of each action. The gain or loss of each outcome would be multiplied by the probability of that outcome occurring; adding these together gives the action’s expected gain. So an action’s expected gain is the sum of probability-times-gain of its various possible outcomes. Ideally rational persons, on this view, would always do whatever had the highest expected gain (or the lowest expected loss when all alternatives lose).

What is “gain” here? Is it pleasure or desire-satisfaction, for oneself or one’s group or all affected by the action? Or is it financial gain, for oneself or one’s company? Consider an economic version of the theory, that ideally rational gamblers would always act to maximize their expected financial gain. Imagine that you’re such an “ideally rational gambler.” You find a game of dice that pays $3,536 on a $100 bet if you throw 12. You’d work out the expected gain of playing or not playing (alternatives P and N) in this way:

P. PLAYING. There are two possible outcomes: P1 (I win) and P2 (I lose). P1 is 1/36 likely and gains $3,536; P1 is worth (1/36 • $3,536) or $98.22. P2 is 35/36 likely and loses $100; P2 is worth (35/36 • −$100), or −$97.22. The expected gain of alternative P is ($98.22 − 97.22), or $1.

N. NOT PLAYING. On this alternative, I won’t win or lose anything. The expected gain of alternative N is (100 percent • $0), or $0.

So then you’d play. If you played this dice game only once, you’d be 97 percent likely to lose money. But the occasional payoff is great; you’d likely gain about a million dollars if you played a million times.
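The bookkeeping for alternatives P and N is simple enough to write out. A minimal sketch of the expected-gain calculation (the names are mine):

```python
def expected_gain(outcomes):
    """An action's expected gain: sum of probability * gain over outcomes."""
    return sum(p * gain for p, gain in outcomes)

# P: play the dice game ($100 bet pays $3,536 on a 12, else lose the bet).
play = expected_gain([(1 / 36, 3536), (35 / 36, -100)])   # about $1
# N: don't play (win or lose nothing).
dont_play = expected_gain([(1.0, 0)])                     # $0
```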

“Ideally rational gamblers” would gamble if the payoff were favorable, but not otherwise. Since casinos take their cut, their payoff is lower; ideally rational gamblers wouldn’t gamble there. But people have interests other than money; for many, gambling is great fun, and they’re willing to pay for the fun.

Some whose only concern is money refuse to gamble even when the odds are in their favor. Their concern may be to have enough money. They may better satisfy this by being cautious; they don’t want to risk losing what they have for the sake of gaining more. Few people would endanger their total savings for the 1-in-900 chance of gaining a fortune 1000 times as great.

Another problem with the “maximize expected gain” policy is that it’s often difficult or impossible to give objective numerical probabilities and to multiply probability by gain. So this policy faces grave difficulties if taken as an overall guide to life. But it can sometimes be useful as a rough guide. At times it’s helpful to work out the expected gain of the various alternatives, perhaps guessing at the probabilities and gains involved.

I once had two alternatives in choosing a flight:

· Ticket A costs $250 and allows me to change my return date.

· Ticket B costs $200 and has a $125 charge if I change my return date.

Which ticket is a better deal for me? Intuitively, A is better if a change is very likely, while B is better if a change is very unlikely. But we can be more precise. Let x represent the probability of my changing the return. Then:

· Expected cost of A = $250.

· Expected cost of B = $200 + ($125 • x).

Algebra shows the expected costs are identical if x is 40 percent. So A is better if a change is more than 40 percent likely, while B is better if a change is less likely than that. Judging from past experiences, the probability of my changing the return date was less than 40 percent. Thus, ticket B minimized my expected cost. So I bought ticket B.
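The break-even algebra can be checked mechanically. This sketch solves 250 = 200 + 125x and confirms that the two expected costs agree at that point (an illustration, not a general tool):

```python
# Expected cost of each ticket as a function of x, the probability of
# changing the return date.
def cost_a(x):
    return 250.0                  # flat fare, free changes

def cost_b(x):
    return 200.0 + 125.0 * x      # cheaper fare plus expected change fee

# Solve 250 = 200 + 125x for the break-even probability.
break_even = (250.0 - 200.0) / 125.0   # 0.4, i.e. 40 percent

assert cost_a(break_even) == cost_b(break_even)
```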

In some cases, however, it might be more rational to pick A. Maybe I have $250 but I don’t have the $325 that option B might cost me; so I’d be in great trouble if I had to change the return date. It might then be more rational to follow “better safe than sorry” and pick A.

5.3a Exercise: LogiCola P (G, D, & V)

Suppose you decide to believe all and only statements that are more probable than not. You’re tossing three coins; which of the next six statements would you believe?

Either the first coin will be heads, or all three will be tails.

You’d believe this, since it happens in 5 of 8 cases: HHH HHT HTH HTT THH THT TTH TTT

1. I’ll get three heads.

2. I’ll get at least one tail.

3. I’ll get two heads and one tail.

4. I’ll get either two heads and one tail, or else two tails and one head.

5. The first coin will be heads.

For problems 6 through 10, suppose you decide to do in all cases whatever would maximize your expected financial gain.

6. You’re deciding whether to keep your life savings in a bank (which pays a dependable 10 percent) or invest in Mushy Software. If you invest in Mushy, you have a 99 percent chance of losing everything and a 1 percent chance of making 120 times your investment this year. What should you do?

7. You’re deciding whether to get hospitalization insurance. There’s a 1 percent chance per year that you’ll have a $10,000 hospital visit (ignore other hospitalizations); the insurance would cover it all. What’s the most you’d agree to pay per year for this insurance?

8. You’re running a company that offers hospitalization insurance. There’s a 1 percent chance per year that a customer will have a $10,000 hospital visit (ignore other hospitalizations); the insurance would cover it all. What’s the least you could charge per year for this insurance to likely break even?

9. You’re deciding whether to invest in Mushy Software or Enormity Incorporated. Mushy stock has a 30 percent probability of gaining 80 percent, and a 70 percent probability of losing 20 percent. Enormity stock has a 100 percent probability of gaining 11 percent. Which should you invest in?

10. You’re deciding whether to buy a computer from Cut-Rate or Enormity. Both models perform identically. There’s a 60 percent probability that either machine will need repair over the period you’ll keep it. The Cut-Rate model is $600 but will be a total loss (requiring the purchase of another computer for $600) if it ever needs repair. The Enormity Incorporated model is $900 but offers free repairs. Which should you buy?

5.4 Reasoning from a sample

Recall our statistical syllogism about the Appalachian Trail:

· 90 percent of the AT shelters have water.

· Rocky Gap is an AT shelter.

· That’s all we know about the matter.

· ∴ Probably Rocky Gap has water.

Premise 1 says 90 percent of the shelters have water. I might know this because I’ve checked all 300 shelters and found that 270 of them had water. More likely, I base my claim on inductive reasoning. On my AT hikes (and I’ve hiked the whole Georgia-to-Maine distance), I’ve observed a large and varied group of shelters, and about 90 percent have had water. I conclude that probably roughly 90 percent of all the shelters (including those not observed) have water:

Sample-projection syllogism

· N percent of examined A’s are B’s.

· A large and varied group of A’s has been examined.

· ∴ Probably roughly N percent of all A’s are B’s.

· 90 percent of examined AT shelters have water.

· A large and varied group of AT shelters has been examined.

· ∴ Probably roughly 90 percent of all AT shelters have water.

Such reasoning assumes that a large and varied sample probably gives us a good idea of the whole. The strength of such reasoning depends on: (1) size of sample; (2) variety of sample; and (3) cautiousness of conclusion.

1. Other things being equal, a larger sample gives a stronger argument. A projection based on a small sample (ten shelters, for example) would be weak. My sample included about 150 shelters.

2. Other things being equal, a more varied sample gives a stronger argument. A sample is varied to the extent that it proportionally represents the diversity of the whole. AT shelters differ. Some are on high ridges, others are in valleys. Some are on the main trail, others are on blue-blazed side trails. Some are in wilderness areas, others are in rural areas. Our sample is varied to the extent that it reflects this diversity.

We’d have a weak argument if we examined only the dozen or so shelters in Georgia. This sample is small, has little variety, and covers only one part of the trail; but the poor sample might be all we have. Background information can help us to criticize a sample. Suppose we checked only AT shelters located on mountain tops or ridges. If we knew that water tends to be scarcer in such places, we’d judge this sample to be biased.

3. Other things being equal, we get a stronger argument if we have a more cautious conclusion. We have stronger reason for thinking the proportion of shelters with water is “between 80 and 95 percent” than for thinking that it’s “between 89 and 91 percent.” Our original argument says “roughly 90 percent.” This is vague; whether it’s too vague depends on our purposes.
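The effect of sample size can be simulated directly. Here's a minimal sketch in Python; the population of 300 shelters with 90 percent having water comes from the text, but the sampling setup is otherwise hypothetical:

```python
import random

# Hypothetical population: 300 shelters, 270 of which (90 percent) have water.
population = [True] * 270 + [False] * 30

def estimate(sample_size):
    """Estimate the proportion of shelters with water from a random sample."""
    sample = random.sample(population, sample_size)
    return sum(sample) / sample_size

# Larger samples tend to land closer to the true 90 percent;
# a 10-shelter sample can easily be far off.
for n in (10, 50, 150):
    print(n, estimate(n))
```

Running this a few times shows estimates from 150-shelter samples clustering tightly around .9, while 10-shelter samples swing widely – the point behind factor (1).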

Suppose our sample-projection argument is strong and its premises are all true. Then it's likely that roughly 90 percent of the shelters have water. But the conclusion is only a rational guess; it could be far off. It may even happen that every shelter that we didn't check is dry. Inductive reasoning brings risk.

Here’s another sample-projection argument:

· 52 percent of the voters we checked favor the Democrat.

· A large and varied group of voters has been checked.

· ∴ Probably roughly 52 percent of all voters favor the Democrat.

Again, our argument is stronger if we have a larger and more varied sample and a more cautious conclusion. A sample of 500 to 1000 people supposedly yields a margin of likely error of less than 5 percent; we should then construe our conclusion as "Probably between 47 and 57 percent of all voters favor the Democrat." To get a varied sample, we might select people using a random process that gives everyone an equal chance of being included. We also might try to have our sample proportionally represent groups (like farmers and the elderly) that tend to vote in a similar way. We should word our survey fairly and not intimidate people into giving a certain answer. And we should be clear whether we're checking registered voters or probable voters. Doing a good pre-election survey isn't easy.
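The "margin of likely error" figure can be checked with the standard 95 percent confidence formula for a sample proportion, roughly 1.96 × √(p(1−p)/n). A sketch in Python (the formula is standard statistics, not from the text):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of error for a sample proportion p, sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# 52 percent support in a sample of 1,000 voters gives a margin of about
# 3 percentage points; with 500 voters, about 4.4 points. Both are under 5.
print(round(margin_of_error(0.52, 1000) * 100, 1))
print(round(margin_of_error(0.52, 500) * 100, 1))
```

This is why pollsters can make do with roughly a thousand respondents: the margin shrinks with the square root of the sample size, not the population size.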

A sample-projection argument ends the way a statistical syllogism begins – with “N percent of all A’s are B’s.” It’s natural to connect the two:

· 90 percent of examined AT shelters have water.

· A large and varied group of AT shelters has been examined.

· ∴ Probably roughly 90 percent of all AT shelters have water.

· Rocky Gap is an AT shelter.

· That’s all we know about the matter.

· ∴ It’s roughly 90 percent probable that Rocky Gap has water.

A sample-projection argument could use “all” instead of a percentage:

· All examined cats purr.

· A large and varied group of cats has been examined.

· ∴ Probably all cats purr.

This conclusion makes a strong claim, since a single non-purring cat would make it false; this makes the argument riskier and weaker. We could expand the argument further to draw a conclusion about a specific cat:

· All examined cats purr.

· A large and varied group of cats has been examined.

· ∴ Probably all cats purr.

· Socracat is a cat.

· ∴ Probably Socracat purrs.

Thus sample-projection syllogisms can have various forms.

5.4a Exercise

Evaluate the following inductive arguments.

After contacting 2 million voters by telephone, we conclude that Landon will beat Roosevelt in 1936 by a landslide for the US presidency. (This was an actual prediction.)

The sample was biased. Those who could afford telephones during the Depression tended to be richer and more Republican. Roosevelt won easily.

1. I randomly examined 200 Loyola University Chicago students at the law school and found that 15 percent were born in Chicago. So probably 15 percent of all Loyola students were born in Illinois.

2. I examined every Loyola student whose Social Security number ended in 3 and I found that exactly 78.4 percent of them were born in Chicago. So probably 78.4 percent of all Loyola students were born in Chicago.

3. Italians are generally fat and lazy. How do I know? Well, when I visited Rome for a weekend last year, all the hotel employees were fat and lazy – all six of them.

4. I meet many people in my daily activities; the great majority of them intend to vote for the Democrat. So the Democrat probably will win.

5. The sun has risen every day as long as humans can remember. So the sun will likely rise tomorrow. (How can we put this into standard form?)

Consider this inductive argument: “Lucy got an A on the first four logic quizzes, so probably she’ll also get an A on the fifth logic quiz.” Would each of the statements 6 through 10 strengthen or weaken this argument?

· 6. Lucy has been sick for the last few weeks and has missed most of her classes.

· 7. The first four quizzes were on formal logic, while the fifth is on informal logic.

· 8. Lucy has never received less than an A in her life.

· 9. A student in this course gets to drop the lowest of the five quizzes.

· 10. Lucy just took her Law School Admissions Test.

We’ll later see a deductive version of the classic argument from design for the existence of God (§7.1b #4). The following inductive version has a sample-projection form and is very controversial. Evaluate the truth of the premises and the general inductive strength of the argument.

· 11. The universe is orderly (like a watch that follows complex laws).

Most orderly things we’ve examined have intelligent designers.

We’ve examined a large and varied group of orderly things.

That’s all we know about the matter.

∴ The universe probably has an intelligent designer.

5.5 Analogical reasoning

Suppose you’re exploring your first Las Vegas casino. The casino is huge and filled with people. There are slot machines for nickels, dimes, quarters, and dollars. There are tables for blackjack and poker. There’s a big roulette wheel. There’s a bar and an inexpensive all-you-can-eat buffet.

You then go into your second Las Vegas casino and notice many of the same things: the size of the casino, the crowd, the slot machines, the blackjack and poker tables, the roulette wheel, and the bar. You’re hungry. Recalling what you saw in your first casino, you conclude, “I bet this place has an inexpensive all-you-can-eat buffet, just like the first casino.”

This is an argument by analogy. The first and second casinos are alike in many ways, so they’re probably alike in some further way:

· Most things true of casino 1 also are true of casino 2.

· Casino 1 has an all-you-can-eat buffet.

· That’s all we know about the matter.

· ∴ Probably casino 2 also has an all-you-can-eat buffet.

Here’s a more wholesome example (about Appalachian Trail shelters):

· Most things true of the first AT shelter are true of this second one.

· The first AT shelter had a logbook for visitors.

· That’s all we know about the matter.

· ∴ Probably this second shelter also has a logbook for visitors.

We argue that things similar in many ways are likely similar in a further way.

Statistical and analogical arguments are closely related:

Statistical

· Most large casinos have buffets.

· Circus Circus is a large casino.

· That’s all we know about the matter.

· ∴ Probably Circus Circus has a buffet.

Analogical

· Most things true of casino 1 are true of casino 2.

· Casino 1 has a buffet.

· That’s all we know about the matter.

· ∴ Probably casino 2 has a buffet.

The first rests on our experience of many casinos, while the second rests on our experience of many features that two casinos have in common.

Here’s the general form of the analogy syllogism:

Analogy syllogism

· Most things true of X also are true of Y.

· X is A.

· That’s all we know about the matter.

· ∴ Probably Y is A.

Premise 1 is rough. In practice, we don’t just count similarities; rather we look for how relevant the similarities are to the conclusion. While the two casinos were alike in many ways, they also differed in some ways:

· Casino 1 has a name whose first letter is “S,” while casino 2 doesn’t.

· Casino 1 has a name whose second letter is “A,” while casino 2 doesn’t.

· Casino 1 has quarter slot machines by the front entrance, while casino 2 has dollar slots there.

These factors aren’t relevant and so don’t weaken our argument that casino 2 has a buffet. But the following differences would weaken the argument:

· Casino 1 is huge, while casino 2 is small.

· Casino 1 has a bar, while casino 2 doesn’t.

· Casino 1 has a big sign advertising a buffet, while casino 2 has no such sign.

These factors would make a buffet in casino 2 less likely.

So we don’t just count similarities when we argue by analogy; many similarities are trivial and unimportant. Rather, we look to relevant similarities. But how do we decide which similarities are relevant? We somehow appeal to our background information about what things are likely to go together. It’s difficult to give rules here – even vague ones.

Our “Analogy Syllogism” formulation is a rough sketch of a subtle form of reasoning. Analogical reasoning is elusive and difficult to put into strict rules.

5.5a Exercise: LogiCola P (I)

Suppose you’re familiar with this Gensler logic book but with no others. Your friend Sarah is taking logic and uses another book. You think to yourself, “My book discusses analogical reasoning, and so Sarah’s book likely does too.” Which of these bits of information would strengthen or weaken this argument – and why?

Sarah’s course is a specialized graduate course on quantified modal logic.

This weakens the argument; such a course probably wouldn’t discuss analogical reasoning. (This answer presumes background information.)

1. Sarah’s book has a different color.

2. Sarah’s book also has chapters on syllogisms, propositional logic, quantificational logic, and meaning and definitions.

3. Sarah’s course is taught by a member of the math department.

4. Sarah’s chapter on syllogisms doesn’t use the star test.

5. Sarah’s book is abstract and has few real-life examples.

6. Sarah’s book isn’t published by Routledge.

7. Sarah’s book is entirely on informal logic.

8. Sarah’s book has cartoons.

9. Sarah’s book has 100 pages on inductive reasoning.

10. Sarah’s book has 10 pages on inductive reasoning.

Suppose your friend Tony at another school took an ethics course that discussed utilitarianism. You’re taking an ethics course next semester. You think to yourself, “Tony’s course discussed utilitarianism, and so my course likely will too.” Which of these bits of information would strengthen or weaken this argument – and why?

· 11. Tony’s teacher transferred to your school and will teach your course as well.

· 12. Tony’s course was in medical ethics, while yours is in general ethical theory.

· 13. Both courses use the same textbook.

· 14. Tony’s teacher has a good reputation, while yours doesn’t.

· 15. Your teacher is a Marxist, while Tony’s isn’t.

5.6 Analogy and other minds

We’ll now study a classic philosophical example of analogical reasoning. This will help us to appreciate the elusive nature of such arguments.

Consider these two hypotheses:

· There are other conscious beings (with thoughts and feelings) besides me.

· I’m the only conscious being. Other humans are like cleverly constructed robots; they have outer behavior but no inner thoughts and feelings.

We all accept the first hypothesis and reject the second. How can we justify this intellectually? Consider that I can directly feel my own pain, but not the pain of others. When I experience the pain behavior of others, how do I know that this behavior manifests an inner experience of pain?

One approach appeals to an argument from analogy:

· Most things true of me also are true of Jones. (We’re both alike in general behavior, nervous system, and so on.)

· I generally feel pain when showing outward pain behavior.

· This is all I know about the matter.

· ∴ Probably Jones also feels pain when showing outward pain behavior.

Since Jones and I are alike in most respects, we’re probably alike in a further respect – that we both feel pain when we show pain behavior. But then there’d be other conscious beings besides me.

Here are four ways to criticize this argument:

· Jones and I also differ in many ways. This may weaken the argument.

· Since I can’t directly feel Jones’s pain, I can’t have direct access to the truth of the conclusion. This makes the argument peculiar and may weaken it.

· I have a sample-projection argument against there being other conscious beings: “All the conscious experiences that I’ve experienced are mine; but I’ve examined a large and varied group of conscious experiences. And so probably all conscious experiences are mine (but then I’m the only conscious being).”

· Since the analogical argument is weakened by such considerations, it at most makes it only somewhat probable that there are other conscious beings. But normally we take this belief to be solidly based.

Suppose we reject the analogical argument. Then why should we believe in other minds? Because it’s an instinctive, commonsense belief that hasn’t been disproved and that’s in our practical and emotional interests to accept? Or because of a special rule of evidence, not based on analogy, that experiencing another’s behavior justifies beliefs about their mental states? Or because talk about mental states is really just talk about behavior (so “being in pain” means “showing pain behavior”)? Or maybe there’s no answer – and I don’t really know if there are other conscious beings besides me.

The analogical argument for other minds highlights problems with induction. Philosophers seldom dispute whether deductive arguments have a correct connection between premises and conclusion; instead, they dispute the truth of the premises. But with inductive arguments it’s often disputed whether and to what extent the premises, if true, provide good reason for accepting the conclusion. Those who like things neat and tidy prefer deductive to inductive reasoning.

5.7 Mill’s methods

John Stuart Mill, a 19th-century British philosopher, formulated five methods for arriving at and justifying beliefs about causes. We’ll study three of these. His basic idea is that factors that regularly occur together may be causally related.

Suppose that Alice, Bob, Carol, and David were at a party. Alice and David got sick, and food poisoning is suspected. Hamburgers, pie, and ice cream were served. This chart shows who ate what and who got sick (where Hm = Hamburger, Pi = Pie, IC = Ice Cream, Sk = Sick, y = yes, and n = no):

         Hm   Pi   IC   Sk
Alice     y    y    n    y
Bob       n    n    y    n
Carol     y    n    n    n
David     n    y    y    y

To find what caused the sickness, we’d search for a factor that correlates with the “yes” answers in the “sick” column. This suggests that the pie did it. Pie is the only thing eaten by all and only those who got sick. This reasoning reflects Mill’s method of agreement:

Agreement

· A occurred more than once.

· B is the only additional factor that occurred if and only if A occurred.

· ∴ Probably B caused A, or A caused B.

· Sickness occurred more than once.

· Eating pie is the only additional factor that occurred if and only if sickness occurred.

· ∴ Probably eating pie caused sickness, or sickness caused the eating of pie.

The second alternative, that sickness caused the eating of pie (perhaps by bringing about a special craving?), is interesting but implausible. So we’d conclude that the people probably got sick because of eating the pie.

The “probably” is important. Eating the pie and getting sick might just happen to have occurred together; maybe there’s no causal connection. Some unmentioned factor (maybe drinking bad water while hiking) might have caused the sicknesses. Or maybe the two sicknesses had different causes.

We took for granted a simplifying assumption. We assumed that the two cases of sickness had the same cause which was a single factor on our list and always caused sickness. Our investigation may force us to give up this assumption and consider more complex solutions. But it’s good to try simple solutions first and avoid complex ones as long as we can.
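Applied to the chart above, the method of agreement amounts to scanning for a factor whose column matches the "sick" column exactly. A minimal sketch in Python (the data is the party chart; the function name is my own):

```python
# Who ate what (Hm = hamburger, Pi = pie, IC = ice cream).
ate = {
    "Alice": {"Hm": True,  "Pi": True,  "IC": False},
    "Bob":   {"Hm": False, "Pi": False, "IC": True},
    "Carol": {"Hm": True,  "Pi": False, "IC": False},
    "David": {"Hm": False, "Pi": True,  "IC": True},
}
sick = {"Alice": True, "Bob": False, "Carol": False, "David": True}

def agreement(ate, sick):
    """Return the factors that occurred if and only if sickness occurred."""
    factors = next(iter(ate.values()))
    return [f for f in factors
            if all(ate[person][f] == sick[person] for person in ate)]

print(agreement(ate, sick))  # only the pie matches the sickness pattern
```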

We can definitely conclude that eating the hamburgers doesn’t necessarily make a person sick, since Carol ate them but didn’t get sick. Similarly, eating the ice cream doesn’t necessarily make a person sick, since Bob ate it but didn’t get sick. Let’s call this sort of reasoning the “method of disagreement”:

Disagreement

· A occurred in some case.

· B didn’t occur in the same case.

· ∴ A doesn’t necessarily cause B.

· Eating the ice cream occurred in Bob’s case.

· Sickness didn’t occur in Bob’s case.

· ∴ Eating the ice cream doesn’t necessarily cause sickness.

Mill used this form of reasoning but didn’t include it in his five methods.
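The method of disagreement is just as mechanical: a factor can't necessarily cause sickness if it occurred in some case where sickness didn't. A sketch in Python, reusing the same chart (the function name is my own):

```python
ate = {
    "Alice": {"Hm": True,  "Pi": True,  "IC": False},
    "Bob":   {"Hm": False, "Pi": False, "IC": True},
    "Carol": {"Hm": True,  "Pi": False, "IC": False},
    "David": {"Hm": False, "Pi": True,  "IC": True},
}
sick = {"Alice": True, "Bob": False, "Carol": False, "David": True}

def cleared_by_disagreement(ate, sick):
    """Return factors that occurred in some case where sickness did not occur."""
    factors = next(iter(ate.values()))
    return [f for f in factors
            if any(ate[p][f] and not sick[p] for p in ate)]

print(cleared_by_disagreement(ate, sick))  # hamburgers (Carol) and ice cream (Bob)
```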

Suppose two factors – eating pie and eating hamburgers – occurred in just those cases where someone got sick. Then the method of agreement wouldn’t lead to any definite conclusion about which caused the sickness. To make sure it was the pie, we might do an experiment. We take two people, Eduardo and Frank, who are as alike as possible in health and diet. We give them all the same things to eat, except that we feed pie to Eduardo but not to Frank. (This is unethical, but it makes a good example.) Then we see what happens. Suppose Eduardo gets sick but Frank doesn’t; then we can conclude that the pie probably caused the sickness. This follows Mill’s method of difference:

Difference

A occurred in the first case but not the second.

The cases are otherwise identical, except that B also occurred in the first case but not in the second.

∴ Probably B is (or is part of) the cause of A, or A is (or is part of) the cause of B.

Sickness occurred in Eduardo’s case but not Frank’s.

The cases are otherwise identical, except that eating pie occurred in Eduardo’s case but not Frank’s.

∴ Probably eating pie is (or is part of) the cause of the sickness, or the sickness is (or is part of) the cause of eating pie.

Since we made Eduardo eat the pie, we reject the second main alternative. So probably eating pie is (or is part of) the cause of the sickness. The cause might simply be the eating of the pie (which contained a virus). Or the cause might be this combined with one’s poor physical condition.
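The method of difference compares two cases alike in everything but one factor. A sketch in Python of the Eduardo-and-Frank experiment (the diets shown are invented details; the text specifies only that pie is the sole difference):

```python
# Two otherwise-identical diets; only the pie differs (hypothetical details).
eduardo_ate = {"Hm": True, "Pi": True, "IC": True}
frank_ate   = {"Hm": True, "Pi": False, "IC": True}

def differing_factors(case1, case2):
    """Return the factors present in the first case but absent in the second."""
    return [f for f in case1 if case1[f] and not case2[f]]

# Eduardo got sick and Frank didn't; the sole differing factor is the pie,
# so eating pie is probably (part of) the cause of the sickness.
print(differing_factors(eduardo_ate, frank_ate))
```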

Another unethical experiment illustrates Mill’s method of variation. This time we find four victims (George, Henry, Isabel, and Jodi) and feed them varying amounts of pie (where 1 = tiny slice, 2 = small slice, 3 = normal slice, 4 = two slices). They get sick in varying degrees (where 1 = slightly sick, 2 = somewhat sick, 3 = very sick, 4 = wants to die):

          Pie   Sick
George     1     1
Henry      2     2
Isabel     3     3
Jodi       4     4

So the pie probably caused the sickness, following Mill’s method of variation:

Variation

A changes in a certain way if and only if B also changes in a certain way.

∴ Probably B’s changes caused A’s, or A’s caused B’s, or some C caused both.

People got sicker if and only if they ate more pie.

∴ Probably eating pie caused the sickness, or the sickness caused the eating of pie, or something else caused both the eating and the sickness.

The last two alternatives are implausible. So we conclude that eating the pie probably caused the sickness.
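The method of variation looks for quantities that rise and fall together; perfect concomitant variation shows up as a Pearson correlation coefficient of 1. A sketch in Python using the pie-and-sickness chart above:

```python
import math

pie      = [1, 2, 3, 4]  # slice sizes: George, Henry, Isabel, Jodi
sickness = [1, 2, 3, 4]  # sickness levels for the same four people

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(correlation(pie, sickness))  # 1.0: perfect concomitant variation
```

As the text stresses, a perfect correlation still leaves three alternatives open; the number alone never tells us which way (if at all) the causation runs.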

Mill’s methods often give us a conclusion with several alternatives. Temporal sequence can eliminate an alternative; the cause can’t come after the effect. Suppose we conclude this: “Either laziness during previous months caused the F on the final exam, or the F on the final exam caused laziness during the previous months.” Here we’d reject the second alternative.

“Cause” can mean either “total cause” or “partial cause.” Suppose Jones got shot and then died. Misapplying the method of disagreement, we might conclude that being shot didn’t cause the death, since some who are shot don’t die. But the proper conclusion is rather that being shot doesn’t necessarily cause death. We also can conclude that being shot wasn’t the total cause of Jones’s death (even though it might be a partial cause). What caused Jones’s death wasn’t just that he was shot. What caused the death was that he was shot in a certain way in certain circumstances (for example, through the head with no medical help). This is the total cause; anyone shot that exact way in those exact circumstances (including the same physical and mental condition) would have died. The method of disagreement deals with total causes, not partial causes.

The ambiguities of the word “cause” run deep. “Factor A causes factor B” could mean any combination of these:

Factor A will always (or probably) by itself (or in combination with factor C) directly (or through a further causal chain) bring about factor B; or the absence of factor A will … bring about the absence of factor B; or both.

The probabilistic sense is controversial. Suppose that the incidence of lung cancer varies closely with heavy smoking, so heavy smokers are much more likely to get lung cancer. Could this probabilistic connection be enough for us to say that heavy smoking is a (partial) cause of lung cancer? Or is it wrong to use “cause” unless we have some factor C such that heavy smoking when combined with factor C always results in lung cancer? Part of the debate over whether a “causal connection” exists between heavy smoking and lung cancer is semantic. Can we use “cause” with probabilistic connections? If we can speak of Russian roulette causing death, then we can speak of heavy smoking causing lung cancer.

5.7a Exercise: LogiCola P (M & B)

Draw whatever conclusions you can using Mill’s methods; supplement Mill’s methods by common sense when appropriate. Say which method you’re using, what alternatives you conclude from the method itself, and how you narrow the conclusion down to a single alternative. Also say when Mill’s methods lead to no definite conclusion.

Kristen’s computer gave error messages when she booted up. We changed things one at a time to see what would stop the messages. What worked was updating the video driver.

By the difference method, probably updating the driver caused (or partially caused) the error messages to stop, or stopping the messages caused (or partially caused) us to update the driver. The latter can’t be, since the cause can’t happen after the effect. So probably updating the driver caused (or partially caused) the error messages to stop.

1. Experiments show that a person’s reaction time is much longer after a few drinks but is relatively uninfluenced by a series of other factors.

2. A study showed that people with no bacteria in their mouth get no cavities – and that people with no food particles in their mouth get no cavities. However, people with both bacteria and food particles in their mouth get cavities.

3. Whenever Michelle drinks scotch and soda, she has a hangover the next day. Whenever she drinks gin and soda, she gets a hangover. Likewise, whenever she drinks rum and soda, she gets a hangover.

4. The morning disc jockey on a radio station remarked in early December that the coldest temperature of the day seemed to occur later and later in the morning. The weather person pointed out that the sunrise had been getting later and later. In a few weeks both processes would reverse themselves, with the sunrise and the coldest temperature of the day both occurring earlier every day.

5. Our research team at the medical center just discovered a new blood factor called “factor K.” Factor K occurs in everyone who has cancer but in no one else.

6. When I sat eating on the rock slab in Grand Gulch, armies of little ants invaded the slab. Later I sat on the slab the same way except that I didn’t eat anything. In the second case the ants didn’t invade the slab.

7. We just did an interesting study comparing the vacation periods of employees and the disappearance of food items. We found that when Megan is working, the items disappear, and when she’s away, they don’t disappear.

8. People in several parts of the country have lower rates of tooth decay. Investigations show that the only thing different about these places is that their water supply contains fluoride.

9. We did an experiment where we selected two more or less identical groups and put fluoride in the first group’s water but not in the second group’s. The first group had a lower rate of tooth decay.

10. Many backpackers think eating raw garlic gives you an odor that causes mosquitoes not to bite you. When hiking a mosquito-infested part of the Bruce Trail, I ate much raw garlic. The mosquitoes bit me in their usual bloodthirsty manner.

11. Little Will throws food on the floor and receives signs of disapproval from Mommy and Daddy. Such things happen regularly. When he eats his food without throwing it on the floor, he doesn’t get any disapproval.

12. Everyone in our study who became a heroin addict had first tried marijuana.

13. If you rub two surfaces together, the surfaces get warm. They’ll get warmer and warmer as you rub the surfaces together harder and faster.

14. When we plot how many hours Alex studies against the grades he gets for his various exams, we see a close correlation.

15. Matches that aren’t either heated or struck don’t light. Matches that are wet don’t light. Matches that aren’t in the presence of oxygen don’t light. Matches that are heated or struck, dry, and in the presence of oxygen do light.

16. Little Will made a discovery. He keeps moving the lever on the radio up and down. He notices that the music gets louder and softer when he does this.

17. We made a careful study of the heart rate of athletes and how it correlates with various factors. The only significant correlation we found is that those who do aerobic exercise (and those alone) have lower heart rates.

18. We investigated many objects with a crystalline structure. The only thing they have in common is that all solidified from a liquid state. (Mill used this example.)

19. After long investigation, we found a close correlation between night and day. If you have night, then there invariably, in a few hours, follows day. If you have day, then invariably, in a few hours, there follows night.

20. Young Will has been experimenting with his electrical meter. He found that if he increases the electrical voltage, then he also increases the current.

21. Whenever Kurt wears his headband, he makes all his field goals. Whenever he doesn’t wear it, he misses them all. This has been going on for many years.

22. The fish in my father’s tank all died. We suspected either the fish food or the water temperature. We bought more fish and did everything the same except for changing the fish food. All the fish died. We then bought more fish and did everything the same except for changing the water temperature. The fish lived.

23. Bacteria introduced by visitors from the planet Krypton are causing an epidemic. We’ve found that everyone exposed to the bacteria gets sick and dies – except those who have a higher-than-normal heart rate.

24. When we chart the inflation rate next to the growth in the national debt over several years, we find a close correlation.

25. On my first backpack trip, I hiked long distances but wore only a single pair of socks. I got bad blisters on my feet. On my second trip, I did everything the same except that I wore two pairs of socks. I got only minor blisters.

5.8 Scientific laws

Ohm’s Law is about electricity. “Law” here suggests great scientific dignity and backing. Ohm’s Law is more than a mere hypothesis (preliminary conjecture) or even a theory (with more backing than a hypothesis but less than a law).

Ohm’s Law is a formula relating electric current, voltage, and resistance; here I = current (in amps), E = voltage (in volts), and R = resistance (in ohms):

Ohm’s Law: I = E/R

An electric current of 1 amp (ampere) is a flow of 6,250,000,000,000,000,000 electrons per second; a 100-watt bulb draws almost an amp, and the fuse may blow if you draw over 15 amps. Voltage pushes the electrons; your outlet may have 117 volts and your flashlight battery 1.5 volts. The voltage encounters an electrical resistance, which restricts the electron flow. A short wire has low resistance (less than an ohm) while an inch of air has high resistance (billions of ohms). Small carbon resistors go from less than an ohm to millions of ohms. Ohm’s Law says that current increases if you raise voltage or lower resistance.

Electric current is like the flow of water through a garden hose. Voltage is like water pressure. Electrical resistance is like the hose’s resistance to the water flow; a long, thin hose has greater resistance than a short, thick one. The current or flow of water is measured in gallons per minute; it increases if you raise the water pressure or use a hose with less resistance.

Ohm’s Law is a mathematical formula that lets us calculate various results. Suppose we put a 10-ohm resistor across your 117-volt electrical outlet; we’d get a current of 11.7 amps (not quite enough to blow your fuse):

I = E/R = 117 volts/10 ohms = 11.7 amps.
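Ohm's Law is a one-line computation. A sketch in Python reproducing the calculation above:

```python
def current(volts, ohms):
    """Ohm's Law: I = E/R, current in amps."""
    return volts / ohms

print(current(117, 10))  # 11.7 amps: not quite enough to blow a 15-amp fuse
```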

Ohm’s Law deals with unobservable properties (current, voltage, resistance) and entities (electrons). Science allows unobservables if they have testable consequences or can somehow be measured. The term “unobservable” is vague. Actually we can feel certain voltages. The 1.5 volts from your flashlight battery can’t normally be felt, slightly higher voltages give a slight tingle, and the 117 volts from your outlet can give a dangerous jolt. Philosophers dispute the status of unobservable entities. Are the ultimate elements of reality unobservables like atoms and electrons, or commonsense objects like chairs, or both? Or are atoms and chairs both just fictions to help us talk about our sensations?

We can ask how scientific laws are discovered, or we can ask how they’re verified. History can tell us how Georg Simon Ohm discovered his law in 1827; philosophy deals more with how such laws are verified (shown to be true). Roughly, scientific laws are verified by a combination of observation and argument; but the details get complicated.

Suppose we want to verify Ohm’s Law. We’re given batteries, resistors, and a meter for measuring current, voltage, and resistance. The meter simplifies our task; we don’t have to define the fundamental units (ampere, volt, and ohm) or invent ways to measure them. Wouldn’t the meter make our task too easy? Couldn’t we just do a few experiments and then prove Ohm’s Law, using standard deductive and inductive reasoning? Unfortunately, it’s not that simple.

Suppose we hook up batteries of different voltages to a resistor:

[Figure: battery connected to a resistor, with voltmeter and ammeter]

The voltmeter measures voltage, and the ammeter measures current. We start with a 10-ohm resistor. We try voltages of 1 volt and 2 volts and observe currents of .1 amp and .2 amp. Here’s a chart with the results (here the horizontal x-axis is for voltage, from 0 to 3 volts, and the vertical y-axis is for current, from 0 to .5 amps):

[Figure: voltage–current chart showing the two observed points]

If E = 1 volt and R = 10 ohms, then I = E/R = 1/10 = .1 amp.

If E = 2 volts and R = 10 ohms, then I = E/R = 2/10 = .2 amp.

Our observations accord with Ohm (I = E/R). So we argue inductively:

· All examined voltage–resistance–current cases follow Ohm.

· A large and varied group of such cases has been examined.

· ∴ Probably all such cases follow Ohm.

Premise 2 is weak, since we tried only two cases. But we can easily perform more experiments; after we do so, Ohm would seem to be securely based.

The problem is that we can give an inductive argument for a second and incompatible hypothesis: “I = (E² – 2E + 2)/R.” Let’s call this Mho’s Law. Surprisingly, our test results also accord with Mho. In the first case (one volt), I = .1 amp [since (1² – 2 • 1 + 2)/10 = (1 – 2 + 2)/10 = 1/10 = .1]; in the second case (two volts), I = .2 amp [since (2² – 2 • 2 + 2)/10 = (4 – 4 + 2)/10 = 2/10 = .2]. So each examined case follows Mho. We can argue inductively as follows:

· All examined voltage–resistance–current cases follow Mho.

· A large and varied group of such cases has been examined.

· ∴ Probably all such cases follow Mho.

This inductive argument for Mho seems as strong as the one we gave for Ohm. Judging just from these arguments and test results, there seems to be no reason for preferring Ohm over Mho, or Mho over Ohm.

The two laws, while agreeing on both test cases so far, give conflicting predictions for further cases. Ohm says we’ll get 0 amps with 0 volts, and .3 amp with 3 volts; Mho says we’ll get .2 amp with 0 volts, and .5 amp with 3 volts:

[Figure: Ohm’s predictions (a straight line through the origin)]

[Figure: Mho’s predictions (a curved line through the same two measured points)]

The two laws are genuinely different, even though both give the same results for a voltage of 1 or 2 volts.
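The agreement and divergence of the two laws is simple arithmetic, and can be checked directly. Here is a small Python sketch (the code and function names are illustrative, not from the text):

```python
def ohm(E, R):
    """Ohm's Law: I = E/R."""
    return E / R

def mho(E, R):
    """The rival 'Mho's Law': I = (E^2 - 2E + 2)/R."""
    return (E**2 - 2*E + 2) / R

R = 10  # ohms
for E in [0, 1, 2, 3]:  # volts
    print(E, ohm(E, R), mho(E, R))
# The two laws agree at 1 and 2 volts (.1 and .2 amp)
# but diverge at 0 volts (0 vs .2 amp) and at 3 volts (.3 vs .5 amp).
```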

We have to try a crucial experiment to decide between the theories. What happens with 3 volts? Ohm says we’ll get .3 amp, but Mho says we’ll get .5 amp. If we do the experiment and get .3 amp, this would falsify Mho:

· If Mho is correct and we apply 3 volts to this 10-ohm resistor, then we get .5 amp current.

· We apply 3 volts to this 10-ohm resistor.

· We don’t get .5 amp current.

· ∴ Mho isn’t correct.

· If M and A, then G Valid

· A

· Not-G

· ∴ Not-M

Premise 1 links a scientific hypothesis (Mho) to antecedent conditions (that 3 volts have been applied to the 10-ohm resistor) to give a testable prediction (that we’ll get .5 amp current). Premise 2 says the antecedent conditions have been fulfilled. But premise 3 says the results conflict with what was predicted. Since this argument has true premises and is deductively valid, our experiment shows Mho to be wrong.
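The validity of this refuting form can be confirmed mechanically by enumerating every assignment of truth values, just as a truth table does. A Python sketch (illustrative, not from the text):

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q false."""
    return (not p) or q

# Form: If (M and A) then G;  A;  not-G;  therefore not-M.
# A counterexample would be a row with all premises true and the conclusion false.
counterexamples = [
    (M, A, G)
    for M, A, G in product([True, False], repeat=3)
    if implies(M and A, G) and A and (not G)  # all premises true
    and not (not M)                           # ...but conclusion false
]
print(counterexamples)  # [] -- no row refutes the form, so it is valid
```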

Does our experiment similarly show that Ohm is correct? Unfortunately not. Consider this argument:

· If Ohm is correct and we apply 3 volts to this 10-ohm resistor, then we get .3 amp current.

· We apply 3 volts to this 10-ohm resistor.

· We get .3 amp current.

· ∴ Ohm is correct.

· If O and A, then G Invalid

· A

· G

· ∴ O

This is invalid, as we could check using propositional logic (Chapter 6). So the premises don’t prove that Ohm is correct; and Ohm might fail for further cases. But the experiment strengthens our inductive argument for Ohm, since it gives a larger and more varied sample. So we can have greater trust that the pattern observed to hold so far will continue to hold.

Here are three aspects of scientific method:

· Scientists often set up crucial experiments to decide between conflicting theories. Scientists dream up alternative theories and look for ways to decide between them.

· We can sometimes deductively refute a theory through a crucial experiment. Experimental results, when combined with other suitable premises, can logically entail that a theory is false.

· We can’t deductively prove a theory using experiments. Experiments can inductively support a theory and deductively refute opposing theories. But they can’t eliminate the possibility of the theory’s failing for further cases.

Recall how the Mho problem arose. We had two test cases that agreed with Ohm. These test cases also agreed with another formula, one we called “Mho”; and the inductive argument for Mho seemed as strong as the one for Ohm. But Ohm and Mho gave conflicting predictions for further test cases. So we did a crucial experiment to decide between the two. Ohm won.

There’s always another Mho behind the bush – so our problems aren’t over. However many experiments we do, there are always alternative theories that agree with all test cases so far but disagree on some further predictions. In fact, there’s always an infinity of theories that do this. No matter how many dots we put on the chart (representing test results), we could draw an unlimited number of lines that go through all these dots but otherwise diverge.
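The “always another Mho” point can be made concrete. Given any finite list of examined voltages, we can manufacture a rival formula that matches Ohm exactly on those voltages but diverges everywhere else, simply by adding a term that vanishes at each examined point. (A Python sketch; the construction, not the code, is the point. Notice that Mho itself has exactly this shape: I = E/R + (E – 1)(E – 2)/R, where 1 and 2 volts were the examined cases.)

```python
def make_rival(examined_voltages, R=10):
    """Build a 'law' that agrees with Ohm at every examined voltage but
    diverges elsewhere: I = E/R + (E - E1)(E - E2)...(E - En)/R.
    The added product is zero at each examined voltage Ei."""
    def law(E):
        bump = 1.0
        for Ei in examined_voltages:
            bump *= (E - Ei)
        return E / R + bump / R
    return law

rival = make_rival([1, 2])   # examined cases: 1 volt and 2 volts
print(rival(1), rival(2))    # .1 and .2 amp, same as Ohm
print(rival(3))              # .5 amp, diverging from Ohm's .3
```

Running `make_rival` on a list of 1000 examined voltages would likewise produce a theory agreeing with Ohm on all 1000 tests but disagreeing on the rest.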

Suppose we conduct 1000 experiments in which Ohm works. There are alternative theories Pho, Qho, Rho, and so on that agree on these 1000 test cases but give conflicting predictions about further cases. And each theory seems to be equally supported by the same kind of inductive argument:

· All examined voltage–resistance–current cases follow this theory.

· A large and varied group of such cases has been examined.

· ∴ Probably all such cases follow this theory.

Even after 1000 experiments, Ohm is just one of infinitely many formulas that seem equally probable on the basis of the test results and inductive logic.

In practice, we prefer Ohm on the basis of simplicity. Ohm is the simplest formula that agrees with all our test results. So we prefer Ohm to the alternatives and see Ohm as firmly based.

What is simplicity and how can we decide which of two scientific theories is simpler? We don’t have neat and tidy answers to these questions. In practice, though, we can tell that Ohm is simpler than Mho; we judge that Ohm’s formula and straight line are simpler than Mho’s formula and curved line. We don’t have a clear and satisfying definition of “simplicity”; yet we can apply this notion in a rough way in many cases.

The simplicity criterion is a form of Ockham’s razor (§16.2):

Simplicity criterion: Other things being equal, we ought to prefer a simpler theory to a more complex one.

The “other things being equal” qualification is important. Experiments may force us to accept very complex theories; but we shouldn’t accept such theories unless we have to.

It’s unclear how to justify the simplicity criterion. Since inductive reasoning stumbles unless we presuppose the criterion, an inductive justification would be circular. Perhaps the criterion is a self-evident truth not in need of justification. Or perhaps it’s pragmatically justified:

· If the simplicity criterion isn’t correct, then no scientific laws are justified.

· Some scientific laws are justified.

· ∴ The simplicity criterion is correct.

Does premise 2 beg the question against the skeptic? Can this premise be defended without appealing to the criterion? The simplicity criterion is vague and raises complex problems, but we can’t do without it.

Coherence is another factor that’s important for choosing between theories:

Coherence criterion: Other things being equal, we ought to prefer a theory that harmonizes with existing well-established beliefs.

Mho has trouble here, since it predicts that 0 volts across a 10-ohm resistor produces a .2 amp current. But then it follows, using an existing well-grounded belief that current through a resistor produces heat, that a 10-ohm resistor with no voltage applied produces heat. While nice for portable handwarmers, this would be difficult to harmonize with the conservation of energy. So the coherence criterion leads us to doubt Mho.
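The arithmetic behind this objection is simple. Using the standard power formulas P = E • I (power supplied by the source) and P = I²R (power dissipated as heat in a resistor), Mho’s prediction at 0 volts gives heat out with no power in. A Python sketch (illustrative, not from the text):

```python
R = 10.0                   # ohms
E = 0.0                    # volts applied
I = (E**2 - 2*E + 2) / R   # Mho's predicted current at 0 volts: .2 amp
heat = I**2 * R            # power dissipated as heat (P = I^2 * R): about .4 watt
supplied = E * I           # power supplied by the source (P = E * I): 0 watts
# About .4 watt of heat from 0 watts of input conflicts with conservation of energy.
```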

Do further tests continue to confirm Ohm? The answer is complicated. Some resistors give, not a straight-line chart, but a curve; this happens if we use an incandescent light bulb for the resistor. Instead of rejecting Ohm, scientists say that heating the resistor changes the resistance. This seems satisfactory, since the curve becomes straighter if the resistor is kept cooler. And we can measure changes in resistance when the resistor is heated externally.

Another problem is that resistors will burn up or explode if enough voltage is applied. This brings an irregularity into the straight-line chart. But again, scientists regard this as changing the resistance, and not as falsifying Ohm.

A more serious problem is that some devices don’t even roughly match the pattern predicted by Ohm. A Zener diode, for example, draws almost no current until a critical voltage is reached; then it draws a high current:

[Figure: Zener diode curve]

Do such devices refute Ohm? Not necessarily. Scientists implicitly qualify Ohm so it applies just to “pure resistances” and not to things like Zener diodes. This seems circular. Suppose that a “pure resistor” is any device that satisfies Ohm. Then isn’t it circular to say that Ohm holds for “pure resistors”? Doesn’t this just mean that Ohm works for any device for which it works?

In practice, people working in electronics quickly learn which devices satisfy Ohm and which don’t. The little tubular “resistors” follow Ohm closely (neglecting slight changes caused by heating and major changes when we burn up the resistor). Zener diodes, transistors, and other semiconductors generally don’t follow Ohm. So Ohm can be a useful principle, even though it’s difficult to specify in any precise and non-circular manner the cases where it applies.

5.8a Exercise

Sketch in a rough way how we might verify or falsify these hypotheses. Point out any special difficulties likely to arise.

Women have less innate logical ability than men.

We’d give a logic test to large and varied groups of each sex, and see how results differ. If women tested lower [they don’t – judging from a test I designed for a friend in psychology], this wouldn’t itself prove lower innate ability, since the lower scores might come from different social expectations or upbringing. It would be difficult to avoid this problem completely; but we might try testing groups in cultures with less difference in social expectations and upbringing.

1. Neglecting air resistance, objects of any weight fall at the same speed.

2. Germs cause colds.

3. A huge Ice-Age glacier covered most of Wisconsin about 10,000 years ago.

4. Regular moderate use of marijuana is no more harmful than regular moderate use of alcohol.

5. When couples have several children, the child born first tends to have greater innate intelligence than the one born last.

6. Career-oriented women tend to have marriages that are more successful than those of home-oriented women.

7. Factor K causes cancer.

8. Water is made up of molecules consisting of two atoms of hydrogen and one atom of oxygen.

9. Organisms of a given biological species randomly develop slightly different traits; organisms with survival-promoting traits tend to survive and pass these traits to their offspring. New biological species result when this process continues over millions of years. This is how complex species developed from simple organisms, and how humans developed from lower species.

10. Earth was created 5,000 years ago, complete with all current biological species.

5.9 Best-explanation reasoning

Suppose you made fudge for a party. When you later open the refrigerator, you find that most of the fudge is gone. You also find that your young son, who often steals desserts, has fudge on his face. The child denies that he ate the fudge. He contends that Martians appeared, ate the fudge, and spread some on his face. But you aren’t fooled. The better and more likely explanation is that the child ate the fudge. So this is what you believe.

This is an inference to the best explanation. Intuitively, we should accept the best explanation for the data. Consider what we said about Ohm’s Law in the previous section. Ohm’s Law explains a wide range of phenomena about electrical voltage, current, and resistance. Besides having testable implications that accord well with our experience, the law also has other virtues, including clarity, simplicity, and coherence with existing well-established beliefs. Unless someone comes up with a better explanation of the data, we should accept Ohm’s Law.

Our best argument for the theory of evolution has a similar form:

· We ought to accept the best explanation for the wide range of empirical facts about biological organisms (including comparative structure, embryology, geographical distribution, and fossil records).

· The best explanation for the wide range of empirical facts about biological organisms is evolution.

· ∴ We ought to accept evolution.

A fuller formulation would elaborate on what these empirical facts are, alternative ways to explain them, and why evolution provides a better explanation than its rivals. Some think our core beliefs about most things, including the existence of material objects, other minds, and perhaps God, are to be justified as inferences to the best explanation.

Particularly interesting is the “fine-tuning” inference for the existence of God. Here the empirical data to be explained is that the basic physical constants that govern the universe (like the gravitational constant “g,” the charge and mass of the proton, the density of water, and the total mass of the universe) are within the very narrow range that makes it possible for life to evolve. Stephen Hawking gives this example: “If the rate of expansion one second after the Big Bang had been smaller by even one part in a hundred thousand million million, the universe would have recollapsed before it ever reached its present size”¹ – which would have blocked the evolution of life. So life requires the expansion rate to be correct to the 17th decimal place; and other constants are similar. How is this empirical data to be explained? Could this precise combination of physical constants have come about by chance? Some atheists propose that there are an infinity of parallel universes, each governed by a different physics, and that it was highly likely that some of these parallel universes could produce life. But many theists claim that the simplest and best explanation involves God: that the universe was caused by a great mind who “fine tuned” its physical laws to make possible the emergence of life.

¹ A Brief History of Time, tenth anniversary edition (New York: Bantam Books, 1998), page 126; he also gives other examples and discusses their theological implications. Antony Flew (There Is a God (New York: HarperCollins, 2007), pp. 113–21) and Francis S. Collins (The Language of God (New York: Free Press, 2006), pp. 63–84) were prominent atheists who converted to theism due to the fine-tuning argument. http://www.harryhiker.com/reason.pdf defends the argument and http://www.harryhiker.com/genesis.exe is a corresponding Windows computer game.

The general form of the inference to the best explanation raises some issues. On what grounds should we evaluate one explanation as “better” than another? Should we accept the best possible explanation (even though no one may yet have thought of it) or the best currently available explanation (even though none of the current explanations may be very good)? And why is the best explanation most likely to be the true one?

5.10 Problems with induction

We’ve seen that inductive logic isn’t as neat and tidy as deductive logic. Now we’ll consider two further perplexing problems: how to formulate principles of inductive logic and how to justify these principles.

We’ve formulated inductive principles in rough ways that if taken literally can lead to absurdities. For example, our statistical-syllogism formulation can lead to this absurd inference:

· 60 percent of all Chicago voters are Democrats.

· This non-Democrat is a Chicago voter.

· That’s all we know about the matter.

· ∴ It’s 60 percent probable that this non-Democrat is a Democrat.

Actually, “This non-Democrat is a Democrat” is 0 percent probable, since it’s a self-contradiction. So our statistical syllogism principle isn’t entirely correct.

We noted that the analogy syllogism is oversimplified in its formulation. We need to rely on relevant similarities instead of just counting resemblances. But “relevant similarities” is hard to pin down.

Sample-projection syllogisms suffer from a problem raised by Nelson Goodman. Consider this argument:

· All examined diamonds are hard.

· A large and varied group of diamonds has been examined.

· ∴ Probably all diamonds are hard.

Given that the premises are true, the argument would seem to be a good one. But consider this second argument, which has the same form except that we substitute a more complex phrase for “hard”:

· All examined diamonds are such that they are hard-if-and-only-if-they-were-examined-before-the-year-2222.

· A large and varied group of diamonds has been examined.

· ∴ Probably all diamonds are such that they are hard-if-and-only-if-they-were-examined-before-the-year-2222.

Premise 1 is tricky to understand. It’s not yet 2222. So if all examined diamonds are hard, then they are such that they are hard-if-and-only-if-they-were-examined-before-the-year-2222. So premise 1 is true. Premise 2 also is true. Then this second argument also would seem to be a good one.

Consider a diamond X that will first be examined after 2222. By our first argument, diamond X probably is hard; by the second, it probably isn’t hard. So our sample projection argument leads to conflicting conclusions.
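The trick in Goodman’s predicate can be made explicit in code: both generalizations fit every diamond examined so far, yet they project opposite verdicts onto a diamond first examined after 2222. (A Python sketch; the sample data are invented for illustration.)

```python
CUTOFF = 2222

def is_hard_projected(exam_year):
    """Projection of 'all diamonds are hard': predicts hard regardless of year."""
    return True

def goodman_projected(exam_year):
    """Projection of 'all diamonds are hard-iff-examined-before-2222':
    predicts hard exactly when the diamond is examined before the cutoff."""
    return exam_year < CUTOFF

# Invented sample: every examined diamond (all before 2222) is hard,
# so the observations confirm BOTH generalizations equally.
examined_years = [1990, 2020]
for year in examined_years:
    observed_hard = True
    assert observed_hard == is_hard_projected(year)
    assert observed_hard == goodman_projected(year)

# But for diamond X, first examined in 2300, the projections conflict:
print(is_hard_projected(2300))   # True  -- "X is hard"
print(goodman_projected(2300))   # False -- "X is not hard"
```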

Philosophers have discussed this problem for decades. Some suggest that we qualify the sample-projection syllogism form to outlaw the second argument; but it’s unclear how to eliminate the bad apples without also eliminating the good ones. As yet, there’s no agreement on how to solve the problem.

Goodman’s problem is somewhat like one we saw in the last section. Here we had similar inductive arguments for two incompatible laws: Ohm and Mho:

· All examined electrical cases follow Ohm’s Law.

· A large and varied group of cases has been examined.

· ∴ Probably all electrical cases follow Ohm’s Law.

· All examined electrical cases follow Mho’s Law.

· A large and varied group of cases has been examined.

· ∴ Probably all electrical cases follow Mho’s Law.

Even after 1000 experiments, there still are an infinity of theories that give the same test results in these 1000 cases but conflicting results in further cases. And we could “prove,” using an inductive argument, that each of these incompatible theories is probably true. But this is absurd. We can’t have each of an infinity of conflicting theories be probably true. Our sample-projection syllogism thus leads to absurdities.

We got around this problem in the scientific-theory case by appealing to simplicity: “Other things being equal, we ought to prefer a simpler theory to a more complex one.” While “simpler” here is vague and difficult to explain, we seem to need some such simplicity criterion to justify any scientific theory.

Simplicity is important in our diamond case, since 1 is simpler than 2:

1. All diamonds are hard.

2. All diamonds are such that they are hard-if-and-only-if-they-were-examined-before-the-year-2222.

By our simplicity criterion, we ought to prefer 1 to 2, even if both have equally strong inductive backing. So the sample-projection syllogism seems to need a simplicity qualification too; but it’s not clear how to formulate it.

So it’s difficult to formulate clear inductive-logic principles that don’t lead to absurdities. Inductive logic is less neat and tidy than deductive logic.

Our second problem is how to justify inductive principles. For now, let’s ignore the problem we just talked about. Let’s pretend that we have clear inductive principles that roughly accord with our practice and don’t lead to absurdities. Why follow these principles?

Consider this inductive argument (which says roughly that the sun will probably come up tomorrow, since it has come up every day in the past):

· All examined days are days in which the sun comes up.

· A large and varied group of days has been examined.

· Tomorrow is a day.

· ∴ Probably tomorrow is a day in which the sun comes up.

Even though the sun has come up every day in the past, it still might not come up tomorrow. Why think that the premise gives good reason for accepting the conclusion? Why accept this or any inductive argument?

David Hume several centuries ago raised this problem about the justification of induction. We’ll discuss five responses.

1. Some suggest that, to justify induction, we need to presume that nature is uniform. If nature works in regular patterns, then the cases we haven’t examined will likely follow the same patterns as the ones we have examined.

There are two problems with this suggestion. First, what does it mean to say “Nature is uniform”? Let’s be concrete. What would this principle imply about the regularity (or lack thereof) of Chicago weather patterns? “Nature is uniform” seems either very vague or clearly false.

Second, what’s the backing for the principle? Justifying “Nature is uniform” by experience would require inductive reasoning. But then we’re arguing in a circle – using the uniformity idea to justify induction, and then using induction to justify the uniformity idea. This presumes what’s being doubted: that it’s reasonable to follow inductive reasoning in the first place. Or is the uniformity idea perhaps a self-evident truth not in need of justification? But it’s implausible to claim self-evidence for a claim about what the world is like.

2. Some suggest that we justify induction by its success. Inductive methods work. Using inductive reasoning, we know what to do for a toothache and how to fix cars. We use such reasoning continuously and successfully in our lives. What better justification for inductive reasoning could we have than this?

This seems like a powerful justification. But there’s a problem. Let’s assume that inductive reasoning has worked in the past; how can we then conclude that it probably will work in the future? The argument is inductive, much like our sunrise argument:

· Induction has worked in the past.

· ∴ Induction probably will work in the future.

· The sun has come up every day in the past.

· ∴ The sun probably will come up tomorrow.

So justifying inductive reasoning by its past success is circular; it uses inductive reasoning and thus presupposes that such reasoning is legitimate.

3. Some suggest that it’s part of the meaning of “reasonable” that beliefs based on inductive reasoning are reasonable. “Reasonable belief” just means “belief based on experience and inductive reasoning.” So it’s true by definition that beliefs based on experience and inductive reasoning are reasonable.

There are two problems with this. First, the definition is wrong. It really isn’t true by definition that all and only things based on experience and inductive reasoning are reasonable. There’s no contradiction in disagreeing with this – as there would be if this definition were correct. Mystics see their higher methods as reasonable, and skeptics see the ordinary methods as unreasonable. Both groups might be wrong, but they aren’t simply contradicting themselves.

Second, even the correctness of the definition wouldn’t solve the problem. Suppose that standards of inductive reasoning are built into the conventional meaning of our word “reasonable.” Suppose that “reasonable belief” simply means “belief based on experience and inductive reasoning.” Then why follow what’s “reasonable” in this sense? Why not instead follow the skeptic’s advice and avoid believing such things? So this semantic approach doesn’t answer the main question: Why follow inductive reasoning at all?

4. Karl Popper suggests that we avoid inductive reasoning. But we seem to need such reasoning in our lives; without inductive reasoning, we have no basis for believing that bread nourishes and arsenic kills. And suggested substitutes for inductive reasoning don’t seem adequate.

5. Some suggest that we approach justification in inductive logic the same way we approach it in deductive logic. How can we justify the validity of deductive principles like modus ponens (“If A then B, A ∴ B”)? Can we prove such principles? Perhaps we can prove modus ponens by doing a truth table (§6.6) and then arguing this way:

· If the truth table for modus ponens never gives true premises and a false conclusion, then modus ponens is valid.

· The truth table for modus ponens never gives true premises and a false conclusion.

· ∴ Modus ponens is valid.

Premise 1 is a necessary truth and premise 2 is easy to check. The conclusion follows. Therefore, modus ponens is valid. But the problem is that the argument itself uses modus ponens. So this attempted justification is circular, since it presumes from the start that modus ponens is valid.

Aristotle long ago showed that every proof must eventually rest on something unproved; otherwise, we’d need an infinite chain of proofs or else circular arguments – and neither is acceptable. So why not just accept the validity of modus ponens as a self-evident truth – a truth that’s evident but can’t be based on anything more evident? If we have to accept some things as evident without proof, why not accept modus ponens as evident without proof?

I have some sympathy with this approach. But, if we accept it, we shouldn’t think that picking logical principles is purely a matter of following “logical intuitions.” Logical intuitions vary enormously among people. The pretest that I give shows that most beginning logic students have poor intuition about the validity of simple arguments. But even though untrained logical intuitions differ, still we can reach agreement on many principles of logic. Early on, we introduce the notion of logical form. And we distinguish between valid and invalid forms – such as these two:

Modus ponens

· If A then B Valid

· A

· ∴ B

Affirming the consequent

· If A then B Invalid

· B

· ∴ A

Students at first are poor at distinguishing valid from invalid forms. They need concrete examples like these:

· If you’re a dog, then you’re an animal. Valid

· You’re a dog.

· ∴ You’re an animal.

· If you’re a dog, then you’re an animal. Invalid

· You’re an animal.

· ∴ You’re a dog.

After enough well-chosen examples, the validity of modus ponens and the invalidity of affirming the consequent become clear.
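Both verdicts can also be checked mechanically, truth-table style: a form is valid just when no assignment of truth values makes all premises true and the conclusion false. A Python sketch (illustrative, not from the text):

```python
from itertools import product

def is_valid(premises, conclusion):
    """Truth-table test over two letters A, B: valid iff every assignment
    making all premises true also makes the conclusion true."""
    return all(
        conclusion(A, B)
        for A, B in product([True, False], repeat=2)
        if all(p(A, B) for p in premises)
    )

implies = lambda p, q: (not p) or q

# Modus ponens: If A then B; A; therefore B.
print(is_valid([lambda A, B: implies(A, B), lambda A, B: A],
               lambda A, B: B))   # True

# Affirming the consequent: If A then B; B; therefore A.
# (A false, B true makes both premises true but the conclusion false.)
print(is_valid([lambda A, B: implies(A, B), lambda A, B: B],
               lambda A, B: A))   # False
```

Of course, as the text goes on to note, trusting such a check already presupposes the logic being checked.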

So, despite the initial clash of intuitions, we eventually reach clear logical principles of universal rational appeal. We do this by searching for clear formulas that lead to intuitively correct results in concrete cases without leading to any clear absurdities. We might think that this procedure proves modus ponens:

· If modus ponens leads to intuitively correct results in concrete cases without leading to any clear absurdities, then modus ponens is valid.

· Modus ponens leads to intuitively correct results in concrete cases without leading to any clear absurdities.

· ∴ Modus ponens is valid.

But this reasoning itself uses modus ponens; the justification is circular, since it presumes from the start that modus ponens is valid. So this procedure of testing modus ponens by checking its implications doesn’t prove modus ponens. But I think it gives a “justification” for it, in some sense of “justification.” This is vague, but I don’t know how to make it more precise.

I suggested that we justify inductive principles the same way we justify deductive ones. Realizing that we can’t prove everything, we wouldn’t demand a proof. Rather, we’d search for clear formal inductive principles that lead to intuitively correct results in concrete cases without leading to any clear absurdities. Once we reached such inductive principles, we’d rest content with them and not look for any further justification.

This is the approach that I’d use in justifying inductive principles. But the key problem is the one discussed earlier. As yet we seem unable to find clear formal inductive principles that lead to intuitively correct results in concrete cases without leading to any clear absurdities. We just don’t know how to formulate inductive principles very rigorously. This is what makes the current state of inductive logic intellectually unsatisfying.

Inductive reasoning has been very useful. Inductively, we assume that it will continue to be useful. In our lives, we can’t do without it. But the intellectual basis for inductive reasoning is shaky.
