Assumptions, Hypotheses, and Antecedents

Vladan Djordjevic

This paper is about the distinction between arguments and conditionals, and the corresponding distinction between premises and antecedents. I will also propose a further distinction between two different kinds of argument, and, correspondingly, two kinds of premise that I will call "assumption" and "hypothesis." The distinction between assumptions, hypotheses, and antecedents is easily made in artificial languages, and we are already familiar with it from our first logic courses (although not necessarily under those names, since there is no standard terminology for the distinction). After explaining their differences in artificial languages, I will argue that there are ordinary-language counterparts of these three notions, meaning that some formal properties of the artificial notions nicely capture some features of the ordinary-language counterparts and their behavior in contexts of reasoning. My next crucial claim is that these three notions often get confused in ordinary language, which leads to problems for translation into symbols. I will suggest a solution to the translation problem by pointing to some distinctive characteristics of the three notions that link them to their artificial-language counterparts. Next, I will argue that this confusion is behind some well-known philosophical problems and puzzles. I will apply the distinctions in order to explain away some famous paradoxes: the direct argument (also known as or-to-if inference), a standard argument for fatalism, and McGee's counterexample to modus ponens. As Stalnaker also solved the first two of these paradoxes by using his theory of reasonable inference, I will elucidate the similarities between our solutions, and also explain why my distinctions apply more broadly, to some cases involving indicative and counterfactual conditionals, where reasonable inference does not apply.

Arguments that preserve truth and arguments that preserve validity have different formal properties. Based on that difference, I will consider them as two different kinds of argument and use different names for their premises (“hypotheses” and “assumptions,” respectively). I will argue that the distinction between these two kinds is more useful than has been generally recognized, and that we can benefit from it in our attempts to do logic of natural language. I will also consider another old distinction: that between arguments and conditionals. There are thus three things to distinguish—two kinds of argument, and conditionals. Section 1 of this paper is about their distinctive formal properties in artificial languages, especially in classical logic and in standard conditional logics (for indicative and counterfactual conditionals). Section 2 points to a difficulty in translating arguments and conditionals from ordinary language into symbols. The “if … then …” construction is common to them, which means that we lack a syntactic mark to distinguish them in ordinary language, and have to find something else to guide our translation. I will suggest a new method of translating. Next, I will claim that our tendency to confuse these three things is behind a number of paradoxes. In particular, the or-to-if problem (also known as the direct argument), a standard argument for fatalism, and McGee’s counterexample to modus ponens will be discussed in detail. Other, related issues, such as Kolodny and MacFarlane’s rejection of modus ponens, and Yalcin’s counterexample to modus tollens, will be briefly mentioned. Using my threefold distinction, I will attempt to explain away these paradoxes. Finally, I will compare my threefold distinction to Stalnaker’s twofold distinction between valid and reasonable inference.

1 The Distinction in Artificial Languages

  1. \[\begin{prooftree} \AxiomC{$P_1, P_2, \ldots P_n$} \UnaryInfC{$C$} \end{prooftree}\]

  2. \[\begin{prooftree} \AxiomC{$\vdash P_1, \vdash P_2, \ldots \vdash P_n$} \UnaryInfC{$\vdash C$} \end{prooftree}\]

  3. \[\begin{prooftree} \AxiomC{$\vDash P_1, \vDash P_2, \ldots \vDash P_n$} \UnaryInfC{$\vDash C$} \end{prooftree}\]

  4. \(\{P_1, P_2, \ldots P_n\} \vdash C\)

  5. \(\{P_1, P_2, \ldots P_n\} \vDash C\)

(2) claims that if the premises \(P_1, P_2, \ldots P_n\) are theorems, then so is the conclusion \(C\). (3) claims that if \(P_1, P_2, \ldots P_n\) are valid formulae, then so is the conclusion \(C\). (4) says that formula \(C\) is a syntactic consequence of the set of formulae \(\{P_1, P_2, \ldots P_n\}\), i.e. that there is a derivation of \(C\) from the set using the rules of inference, or rules and axioms, of our presupposed logical system. (5) says that \(C\) is a semantic consequence of the set of formulae \(\{P_1, P_2, \ldots P_n\}\), meaning that there is no interpretation (valuation, model, etc.) that makes \(C\) false and each formula from the set \(\{P_1, P_2, \ldots P_n\}\) true. The usual meaning of the horizontal line is truth preservation: if whatever occurs above is true, then so is the thing below. This reduces the meaning of (1) to the meaning of (5).

  6. \(P_1 \land P_2 \land \ldots \land P_n \rightarrow C\)

  7. \(\vdash P_1 \land P_2 \land \ldots \land P_n \rightarrow C\)

  8. \(\vDash P_1 \land P_2 \land \ldots \land P_n \rightarrow C\)

(6) is a conditional with the conjunction \(P_1 \land P_2 \land \ldots P_n\) as its antecedent and formula \(C\) as its consequent. (7) and (8) respectively claim that (6) is a theorem and a valid formula. Among (1)–(8) only (6) is entirely in the object language. (4) and (5) are metaclaims about a relation between a set of formulae and a formula. (2) and (3) are metaclaims about a relation between a set of metaclaims and a metaclaim.

The foregoing should be familiar. Now let me point to a possible terminological confusion. We tend to use the labels “premises” or “conclusion” for the object-language formulae \(P_1, P_2, \ldots P_n\) and \(C\) in all of the above arguments, including (2) and (3). (I did the same above; if you didn’t notice or if it didn’t bother you, then you have the same tendency.) Strictly speaking, this is not right. The premises and the conclusion in (4) and (5) are indeed in the object language, but this is not the case in (2) and (3); what is above and below the horizontal line in (2) and (3) belongs to the metalanguage. Given the usual meaning of the line, (2) (or (3)) says that if it is true that the object-language formulae \(P_1, P_2, \ldots P_n\) are theorems (valid), then it is true that the object-language formula \(C\) is a theorem (valid). If we keep on calling the object-language formulae “premises” or “conclusions” as the case may be, we shall have to change the meaning of the horizontal line in (2) and (3). For, in that case, it could no longer be about truth-preservation, but about theoremhood or validity-preservation. Thus when reading (2) and (3), we have to choose between the following alternatives:

  9. truth-preserving line and premises/conclusions in the metalanguage, or

  10. validity/theoremhood-preserving line and object-language premises/conclusions.

Each of these can be correctly used. (9) is more common, but I will try to show later in this section that (10) may have its own merits.

Definition 1. An assumption is an object-language formula used as a premise in an argument of the form (2) or (3).

A hypothesis is an object-language formula used as a premise in an argument of the form (4) or (5).

An argument from assumptions has the form of (2) or (3).

An argument from hypotheses has the form of (4) or (5).

A conclusion is the whole object-language formula occurring to the right of the turnstile, or below the line in arguments of the form (2)–(5).

A single line is the usual truth-preserving line.

A double line does not indicate preservation of truth but preservation of some other special status, such as theoremhood or validity.

Having made these stipulations, I shall now comment on the choice between (9) and (10). Obviously, Definition 1 relies on (10), since all premises are said to belong to the object language. In that case, it is the line that makes the difference between the two types of arguments: whereas arguments from hypotheses claim that the conclusion inherits truth from the premises, arguments from assumptions claim that the conclusion inherits some special modal status from the premises. There is, however, no reason to restrict ourselves to only one kind of line—both are clear and both can be useful. (A third line might be introduced to stand for derivability and capture the meaning of (4), but for my present purposes two will be enough.) So, it would be better to reformulate our dilemma thus:

  11. premises/conclusions sometimes in the metalanguage (2, 3), sometimes in the object language (1, 5); arguments always truth-preserving, or

  12. premises/conclusions always in the object language; arguments sometimes truth-preserving (1, 5), sometimes preserving special status (2, 3).

Choosing (12) over (11) might be preferable for the following reason. We apply names, such as “modus ponens” or “disjunctive syllogism” (and other such names for argument-forms) to both arguments from assumptions and arguments from hypotheses. What identifies arguments (such as modus ponens or disjunctive syllogism etc.) is their form. What identifies the form of an argument is the form of the premises and the conclusion. If this is so, choosing (12) and keeping both kinds of lines from Definition 1 enables us to say that all of the following are instances of modus ponens:

\[\begin{prooftree} \AxiomC{$\vDash A$, $\vDash A \rightarrow C$} \UnaryInfC{$\vDash C$} \end{prooftree}\]

\[\{A, A \rightarrow C\} \vDash C\]

\[\begin{prooftree} \AxiomC{$A$, $A \rightarrow C$} \doubleLine \UnaryInfC{$C$} \end{prooftree}\]

\[\begin{prooftree} \AxiomC{$A$, $A \rightarrow C $} \UnaryInfC{$C$} \end{prooftree}\]

Therefore, choosing (12) over (11) enables us to talk about different kinds of argument having the same form.

Note that this fits our informal practice in logic; although by “modus ponens” we usually mean an argument from hypotheses, we often say, for example, that the Hilbert-style axiomatization of propositional logic uses modus ponens as a rule of inference.1 That rule (called the “rule of implication” by Hilbert and Ackermann 1950, 28) is an argument from assumptions: it says that if both a material implication and its antecedent are theorems, then so is its consequent.
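
In the notation adopted here, that rule can be written as an argument from assumptions of the form (2):

\[\begin{prooftree} \AxiomC{$\vdash A$, $\vdash A \supset C$} \UnaryInfC{$\vdash C$} \end{prooftree}\]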

Now I would like to point to certain formal properties of assumptions, hypotheses and antecedents, and I will do that in the following subsections. Before that, I will limit the types of logical systems I have in mind. Although my claims will hold for many more systems, it will be easier if we restrict our attention to a limited number. Because of the nature of the paradoxes that will be discussed in this paper, my main concern is with conditional logics, i.e. logics for indicative and counterfactual conditionals. What we might call a “typical” or “standard” conditional logic is based on some modal logic, which in turn is based on classical propositional logic (\(PL\)). Not any modal logic will do. The box will need to have some formal properties that capture enough features of (meta)physical or logical necessity, so usually some alethic normal modal system is used, such as \(T\) or \(S5\), or some system between the two. Adding the so-called selection function to such a modal system gives us a typical conditional logic. The role of that function is to select desired possible worlds needed for evaluating the truth value of a conditional: \(A \rightarrow C\) is true at a world \(\alpha\) iff \(C\) is true in all of the selected worlds where \(A\) is true.2

Unless explicitly stated otherwise, from now on, our presupposed logical systems are \(PL\), a modal logic based on \(PL\), such as \(T\), \(S5\), or a system stronger than \(T\) and weaker than \(S5\), and the “typical” conditional logic based on such a modal logic.

1.1 Differences between Hypotheses and Antecedents

Arguments and conditionals are similar. We can use “if … then …” to express either when we talk informally. However, accepting the truth of a conditional and accepting an argument are different things, like particular and universal claims. Let \(M\) be a model, or an interpretation, or a world, or a valuation. Then \(M \vDash A \rightarrow C\) claims that \(A \rightarrow C\) is true relative to \(M\), while an argument with \(A\) as premise and \(C\) as conclusion is acceptable/valid if and only if there is no counterexample in any possible model (interpretation/world/valuation). Thus, we have an obvious difference between a true conditional and its corresponding argument. The validity of an argument with hypothesis \(A\) and conclusion \(C\) entails the truth of \(A \rightarrow C\), but not the other way around. Conditionals can be true necessarily or contingently. Arguments are valid necessarily or not at all.
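
A minimal illustration with the horseshoe: any valuation \(v\) with \(v(p)\) false makes \(p \supset q\) true at \(v\),

\[v \vDash p \supset q \quad\textnormal{whenever}\quad v(p) = F, \qquad\textnormal{yet}\qquad \{p\} \nvDash q,\]

since the valuation with \(v(p) = T\) and \(v(q) = F\) is a countermodel to the argument. A material implication can thus be true at a point of evaluation while the corresponding argument is invalid.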

In cases where a conditional is valid, or is a theorem, the main thing that reveals the differences or similarities between conditionals and corresponding arguments and between premises and antecedents is the deduction theorem. (13) and (14) below give us the form of the theorem in the case of material implication (“\(\supset\)”).

  13. If \(\{P_1, P_2, \ldots, P_n\} \vdash C\) then \(\{P_1, P_2, \ldots, P_{n-1}\} \vdash P_n \supset C\)

  14. If \(\{P_1, P_2, \ldots, P_n\} \vDash C\) then \(\{P_1, P_2, \ldots, P_{n-1}\} \vDash P_n \supset C\)

(13) and (14) are metatheorems of \(PL\), and so is the converse of each. Before considering more general cases, let us first take \(n=1\) to compare arguments with one premise and corresponding conditionals. In the case of material implication, it is easy to pass from proven implications to arguments, and conversely:

  15. \(\{A\} \vdash C\) iff \(\vdash A \supset C\), and

  16. \(\{A\} \vDash C\) iff \(\vDash A \supset C\)

Thus, the deduction theorem and its converse inform us about the relation between antecedents of proven/valid material implications and hypotheses, a relation that does not hold between antecedents of proven/valid material implications and assumptions. For example, the rule of necessitation allows us to infer \(\vdash\Box A\) from \(\vdash A\), but \(\vdash A \supset\Box A\) does not hold. Therefore, there is no significant difference between antecedents and hypotheses in (15) and (16), but there is still a significant difference between antecedents and assumptions.
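
To see why \(A \supset \Box A\) fails as a theorem, a minimal countermodel sketch: let \(A\) be contingent, true at a world \(\alpha\) that accesses a world \(\beta\) where \(A\) is false. Then

\[\alpha \vDash A \land \neg \Box A, \qquad\textnormal{so}\qquad \alpha \nvDash A \supset \Box A.\]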

The typical conditional logic defines a conditional that is stronger than material implication and weaker than strict implication, in this sense (the arrow stands for the conditional):

  17. \(\vDash\Box (A \supset C) \supset (A \rightarrow C)\) and \(\vDash (A \rightarrow C) \supset (A \supset C)\)

The converse of (17) is not valid, i.e. the conditional does not follow from the material implication, nor does it entail the strict implication. Using (17) and the deduction theorem and its converse for “\(\supset\)” we can prove that an analogue of (15) and (16) holds for the conditional as well (a sketch of the proof is given below):

  18. \(\{A\} \vdash C\) iff \(\vdash A \rightarrow C\), and

  19. \(\{A\} \vDash C\) iff \(\vDash A \rightarrow C\)
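
A sketch of the left-to-right direction of (19):

\[\{A\} \vDash C \ \Rightarrow\ \vDash A \supset C \ \Rightarrow\ \vDash \Box (A \supset C) \ \Rightarrow\ \vDash A \rightarrow C\]

by the deduction theorem for \(\supset\), by necessitation (a valid formula is true at every world, hence necessary everywhere), and by (17) with modus ponens. The right-to-left direction follows from the converse of the deduction theorem for \(\rightarrow\), i.e. from modus ponens for the conditional; (18) is analogous with \(\vdash\) in place of \(\vDash\).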

Thus again, there is no significant difference between the antecedents of a valid/proven conditional and the corresponding hypothesis in (18) and (19). There is still the same important difference between assumptions and antecedents of conditionals, for the same reason.

So far, we have considered cases where the number of premises \(n=1\). For an arbitrary number of premises things get more complicated, since the deduction theorem for the conditional can easily fail. Consider:

  20. \(a\) \(\{\neg A \lor C, A\} \vDash C\) from \(PL\)
    \(b\) \(\{\neg A \lor C\} \vDash A \rightarrow C\) from \(a\) by the deduction theorem for \(\rightarrow\)
    \(c\) \(\vDash (\neg A \lor C) \supset (A \rightarrow C)\) from \(b\) by the deduction theorem for \(\supset\)
    \(d\) \(\vDash (A \supset C) \supset (A \rightarrow C)\) from \(c\) by \(PL\)
    \(e\) \(\vDash (A \rightarrow C) \supset (A \supset C)\) from (17)
    \(f\) \(\vDash (A \rightarrow C) \equiv (A \supset C)\) from \(d\) and \(e\) by \(PL\)

(20.\(f\)) reduces the arrow to the horseshoe and must be rejected if we want to keep the difference between the two connectives. Step (20.\(e\)) amounts to the claim that modus ponens is valid for the conditional. If we assume that a conditional is not a conditional without modus ponens, then (20.\(e\)) cannot be rejected. Rejecting any other step besides (20.\(b\)) would require a change in the basic (propositional or modal) logic. So, the smallest price is to reject (20.\(b\)).
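
And (20.\(b\)) can be rejected: a countermodel sketch, writing \(f\) for the selection function (a notation introduced just for this sketch). Let the disjunction be true at \(\alpha\) only because \(\neg A\) is, and let some selected \(A\)-world falsify \(C\):

\[\alpha \vDash \neg A \ (\textnormal{hence } \alpha \vDash \neg A \lor C), \qquad \beta \in f(A, \alpha),\ \beta \vDash A \land \neg C, \qquad\textnormal{so}\qquad \alpha \nvDash A \rightarrow C.\]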

The converse of the deduction theorem amounts to the claim that modus ponens holds for the implication or conditional in question. Since modus ponens is considered to hold trivially in typical conditional logics, so does the metatheorem that claims that modus ponens holds. Therefore, the converse of the deduction theorem holds for both horseshoe and arrow. However, since the deduction theorem for “\(\rightarrow\)” does not generally hold, relations between arguments and conditionals differ from the relations between arguments and material implications. We can see that hypotheses move easily around the turnstile in the case of material implication: \[\{P_1, P_2,\ldots,P_m,\ldots,P_n\} \vDash C\] if and only if \[\{P_1, P_2,\ldots,P_m\} \vDash (P_{m+1} \supset (P_{m+2} \supset \ldots (P_n \supset C)\ldots))\] if and only if \[\vDash (P_1 \supset (P_2 \supset \ldots (P_n \supset C)\ldots))\] if and only if \[\vDash P_1 \land P_2 \land \ldots \land P_n \supset C\] But, if we replace “\(\supset\)” with “\(\rightarrow\),” the two middle elements in this chain of equivalences have to be dropped so that only two remain: \[\{P_1, P_2,\ldots,P_m,\ldots,P_n\} \vDash C\] if and only if \[\vDash P_1 \land P_2 \land \ldots \land P_n \rightarrow C\] The reason for this is that whereas exportation and importation are valid for material implication, exportation is invalid for conditionals:3 \[\{A \rightarrow (B \rightarrow C)\} \vDash A \land B \rightarrow C \quad\textnormal{(imp.)}\] \[\{A \land B \rightarrow C\} \nvDash A \rightarrow (B \rightarrow C) \quad\textnormal{(exp.)}\] Because of this, the material implication easily allows nesting in the consequent, while nesting is often problematic for conditionals. We can use our previous example to illustrate that:

For the conditional:

\(\{\lnot A \lor C,\ A\} \vDash C\)

\(\{\lnot A \lor C\} \nvDash A \rightarrow C\)

\(\nvDash (\lnot A \lor C) \rightarrow (A \rightarrow C)\)

\(\vDash ((\lnot A \lor C) \land A) \rightarrow C\)

For the material implication:

\(\{\lnot A \lor C,\ A\} \vDash C\)

\(\{\lnot A \lor C \} \vDash A \supset C\)

\(\vDash (\lnot A \lor C) \supset (A \supset C)\)

\(\vDash ((\lnot A \lor C) \land A) \supset C\)

Let me summarize this subsection. What is the difference between accepting a conditional and accepting an argument? We can understand this question in two ways: (a) What is the difference between accepting the truth of a conditional and the validity of an argument? Or (b) What is the difference between accepting the validity of a conditional and the validity of an argument? Let us answer first for the case of simple antecedents, i.e. arguments with only one hypothesis, and leave the more general case for later. (ad a) The validity of an argument with \(A\) as hypothesis and \(C\) as conclusion is sufficient for the truth of \(A \rightarrow C\). The truth of \(A \rightarrow C\) can be context-dependent and contingent, and is therefore not sufficient for the validity of the argument. (ad b) But the argument is valid if and only if the conditional is valid. Thus, in this case, the difference between antecedents and hypotheses (conditionals and arguments) is not significant. This would not hold if \(A\) were an assumption instead of a hypothesis. In more general cases, when we have more than one hypothesis, things are more complicated. Hypotheses cannot become antecedents by moving right from the turnstile, since the deduction theorem does not hold for conditionals. Since the converse of the deduction theorem holds, antecedents can become hypotheses by moving left from the turnstile. Hypotheses can become antecedents only all at once, i.e. if the antecedent is a conjunction of all the hypotheses, and an empty set remains on the left of the turnstile.

1.2 The Distinction between Assumptions and Hypotheses

The decision to regard both assumptions and hypotheses as object language formulae allows us to talk about the same argument-forms for different types of argument. It also makes sense of claims like the following: “conclusion \(C\) follows from \(A\) if \(A\) is taken as an assumption, but not if \(A\) is a hypothesis”; “this form is valid for arguments from hypotheses, but not for arguments from assumptions.” Often an argument-form is valid for both kinds; modus ponens, for example. Our main interest in this section is to show some forms that hold only for one kind.

1.2.1 Inferences Both Ways

The claim that two formulae are equivalent is usually expressed in symbols with a turnstile and a material biconditional: \(\vDash A \equiv B\) or \(\vdash A \equiv B\). Such an equivalence can also serve as a definition of one of the formulae, \(A\) or \(B\). For later purposes, it is important to notice that if two formulae can be inferred from each other, this double inference does not always amount to equivalence.

  21. \[\begin{prooftree} \AxiomC{$\vDash A$} \UnaryInfC{$\vDash B$} \end{prooftree}\]     and     \[\begin{prooftree} \AxiomC{$\vDash B$} \UnaryInfC{$\vDash A$} \end{prooftree}\]
  22. \(\{A\} \vDash B\)   and   \(\{B\} \vDash A\)

  23. \(\vDash A \equiv B\)

From (22) we can infer (23). We just need to apply the deduction theorem to (22):

  24. \(\vDash A \supset B\)    and    \(\vDash B \supset A\)

(24) follows from (22), and (23) follows from (24).

However, (23) does not follow from (21). Inferences both ways from assumptions do not amount to equivalence. Consider:

  25. \[\begin{prooftree} \AxiomC{$\vDash A$} \UnaryInfC{$\vDash \Box A$} \end{prooftree}\]     and     \[\begin{prooftree} \AxiomC{$\vDash \Box A$} \UnaryInfC{$\vDash A$} \end{prooftree}\]
  26. \(\vDash A \equiv \Box A\)

(25) is valid, but (26) is not.
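
The upward direction of (25) holds because a valid formula is true at every world of every model, so \(\Box A\) is then also true everywhere; the downward direction holds on the reflexive frames of \(T\) and its extensions. But (26) fails for any contingent \(p\): at a world \(\alpha\) where \(p\) is true and some accessible \(\beta\) makes \(p\) false,

\[\alpha \vDash p \land \neg \Box p, \qquad\textnormal{so}\qquad \nvDash p \equiv \Box p.\]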

1.2.2 Validity of Some Standard Rules (Transitivity, Contraposition, Constructive Dilemma)

In conditional logics, arguments from hypotheses in the form of transitivity (hypothetical syllogism) and contraposition typically fail: \[\{A \rightarrow B, B \rightarrow C\} \nvDash A \rightarrow C\] \[\{A \rightarrow C\} \nvDash \lnot C \rightarrow \lnot A\] We will show that these forms hold for arguments from assumptions. In these proofs, we will make several suppositions about conditionals, but these suppositions are all “safe,” i.e. they trivially hold in standard conditional logics. We will suppose that the converse of the deduction theorem and modus ponens hold for \(\rightarrow\), and that strict implication entails the conditional, as in (17); also, we suppose the standard truth conditions: a conditional is true in a world iff the consequent holds in all selected antecedent-worlds. We will also require that these conditions imply that if a conditional is false in a world, then there must be an accessible world where the antecedent is true and the consequent false. Below are the syntactic and semantic versions of the proof of the transitivity of “\(\rightarrow\)”:

  27. \(a\) \(\vdash A \rightarrow B\) assumption
    \(b\) \(\vdash B \rightarrow C\) assumption
    \(c\) \(\{A\} \vdash B\) from \(a\) by the converse of the deduction theorem
    \(d\) \(\{A\} \vdash B \rightarrow C\) from \(b\) by \(PL\) (monotonicity)
    \(e\) \(\{A\} \vdash C\) from \(c\) and \(d\) by modus ponens
    \(f\) \(\vdash A \supset C\) from \(e\) by the deduction theorem for \(\supset\)
    \(g\) \(\vdash \Box (A \supset C)\) from \(f\) by necessitation
    \(h\) \(\vdash A \rightarrow C\) from \(g\) and (17) by modus ponens

Now the semantic version of transitivity. A countermodel cannot be made:

[Figure: the attempted countermodel. World \(\alpha\), with \(\Box (A \rightarrow B)\), \(\Box (B \rightarrow C)\), and \(\lnot \Box (A \rightarrow C)\), accesses a world \(\beta\), with \(\Box (A \rightarrow B)\), \(\Box (B \rightarrow C)\), and \(\lnot (A \rightarrow C)\), which in turn accesses a world \(\gamma\), with \(A \rightarrow B\), \(B \rightarrow C\), \(A\), \(\lnot C\), hence \(A \supset B\), \(B \supset C\), and so \(B\)?]

The negated necessity in \(\alpha\) requires the existence of an accessible world (say, \(\beta\)) where the proposition which is not necessary in \(\alpha\) is false. The false conditional in \(\beta\) requires the existence of an accessible world (\(\gamma\)) where the antecedent is true and the consequent false. In \(\gamma\) the two conditionals hold (since they are necessary in a world from which \(\gamma\) is accessible), and they entail the two material implications (17). But then \(\gamma\) is an impossible world: \(A\) and \(A \supset B\) give \(B\), which with \(B \supset C\) gives \(C\), contradicting \(\lnot C\).
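
A parallel derivation establishes contraposition (a sketch, using the same safe suppositions):

  \(a\) \(\vdash A \rightarrow C\) assumption
    \(b\) \(\vdash A \supset C\) from \(a\) and (17) by modus ponens
    \(c\) \(\vdash \neg C \supset \neg A\) from \(b\) by \(PL\)
    \(d\) \(\vdash \Box (\neg C \supset \neg A)\) from \(c\) by necessitation
    \(e\) \(\vdash \neg C \rightarrow \neg A\) from \(d\) and (17) by modus ponens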

Thus, transitivity as an argument from assumptions holds for conditionals, and the sketch above shows that contraposition holds too. However, constructive dilemma, which is a valid form for arguments from hypotheses, fails for arguments from assumptions. Consider constructive dilemma in the way it is presented in Fitch-style systems of natural deduction:

\[
\begin{array}{ll}
1. & A \lor B \\
2. & \quad A \\
3. & \quad C \\
4. & \quad B \\
5. & \quad D \\
6. & C \lor D
\end{array}
\]

(indentation marks the two subproofs, 2–3 and 4–5)

This rule, like the other introduction and elimination rules for each connective in natural deduction systems, is an argument from hypotheses. The assumption-version of constructive dilemma would require both sub-arguments and the main argument to be from assumptions. It might be more convenient to present the two kinds of argument Gentzen-style. So, the constructive dilemma as an argument from hypotheses looks like this:

\[\begin{prooftree} \AxiomC{$A \lor B$} \AxiomC{$A$} \UnaryInfC{$C$} \AxiomC{$B$} \UnaryInfC{$D$} \TrinaryInfC{$C \lor D$} \end{prooftree}\]

or:

\[\begin{prooftree} \AxiomC{$A \lor B$} \AxiomC{$\{A\} \vdash C$} \AxiomC{$\{B\} \vdash D$} \TrinaryInfC{$C \lor D$} \end{prooftree}\]

The constructive dilemma as an argument from assumptions looks like this (the double turnstiles may be replaced by single turnstiles for a syntactic version):

\[\begin{prooftree} \AxiomC{$\vDash A \lor B$} \AxiomC{$\vDash A$} \UnaryInfC{$\vDash C$} \AxiomC{$\vDash B$} \UnaryInfC{$\vDash D$} \TrinaryInfC{$\vDash C \lor D$} \end{prooftree}\]

or, more conveniently, using the double line:

\[\begin{prooftree} \AxiomC{$A \lor B$} \AxiomC{$A$} \doubleLine \UnaryInfC{$C$} \AxiomC{$B$} \doubleLine \UnaryInfC{$D$} \doubleLine \TrinaryInfC{$C \lor D$} \end{prooftree}\]

Let us take \(\neg A\) for \(B\), \(\Box A\) for \(C\), and \(\Box \neg A\) for \(D\), and let us consider these two arguments:

\[\begin{prooftree} \AxiomC{$A \lor \neg A$} \AxiomC{$A$} \UnaryInfC{$\Box A$} \AxiomC{$\neg A$} \UnaryInfC{$\Box \neg A$} \TrinaryInfC{$\Box A \lor \Box \neg A$} \end{prooftree}\]

\[\begin{prooftree} \AxiomC{$A \lor \neg A$} \AxiomC{$A$} \doubleLine \UnaryInfC{$\Box A$} \AxiomC{$\neg A$} \doubleLine \UnaryInfC{$\Box \neg A$} \doubleLine \TrinaryInfC{$\Box A \lor \Box \neg A$} \end{prooftree}\]

From “It does or it does not rain” we should not be able to infer “It either necessarily rains or it necessarily does not rain.” The two arguments fail for different reasons. The former, the argument from hypotheses, has a valid form but the sub-arguments are invalid. The latter, the argument from assumptions, has valid sub-arguments but an invalid form.

\[
\begin{array}{l|c|c|c|c|c}
 & \textnormal{necessitation} & \textnormal{inference both ways gives equivalence} & \textnormal{transitivity} & \textnormal{contraposition} & \textnormal{constructive dilemma} \\
\hline
\textnormal{arguments from hypotheses} & \times & \checkmark & \times & \times & \checkmark \\
\textnormal{arguments from assumptions} & \checkmark & \times & \checkmark & \checkmark & \times
\end{array}
\]

Let us also mention some cases where the two types of arguments match:

\[
\begin{array}{l|c|c|c|c}
 & \textnormal{modus ponens} & \textnormal{modus tollens} & \textnormal{importation} & \textnormal{exportation} \\
\hline
\textnormal{arguments from hypotheses} & \checkmark & \checkmark & \checkmark & \times \\
\textnormal{arguments from assumptions} & \checkmark & \checkmark & \checkmark & \times
\end{array}
\]

2 Translation from Ordinary Language into Symbols

In this section, we turn from formal to natural language and look for counterparts of our three notions. We face an immediate difficulty. In formal language, we had no difficulty recognizing and distinguishing antecedents from premises, or conditionals from arguments. It was enough to be familiar with the syntax of the formal language. However, in natural language we do not have distinctive syntactic characteristics of conditionals and arguments because we often use “if … then …” for both. Rarely do we have conditionals and arguments expressed in an explicit form which tells us that it is one and not the other. Thus, we have a problem when we want to translate our if-constructions into symbols: when and why are we to translate them as conditionals, and when and why are we to translate them as arguments? How can we deal with this problem? Suppose we had a good/acceptable/not-obviously-false/adequate/true/ultimate theory of conditionals, i.e. a formal semantics. Such a theory would be an obvious candidate for a translation guide: it would tell us about the formal characteristics of conditionals, on the one hand, and arguments, on the other, and it would reveal how these differ (similar to what I tried to do in section 1). With these differences in mind, we would do our best to choose a charitable translation that makes the most sense in the given context.4

Let us pretend that the standard theory of conditionals, as outlined in section 1, is our theory of choice. Let us bear in mind that it is at best an outline of a theory, with huge gaps to be filled and lots of formal and informal work left to be done, and that this work must include pragmatics if we are to understand our usage of conditionals and to be able to evaluate our semantics. The outline is compatible with many formal semantics that have been proposed—some of those being very weak (in the sense that few rules involving conditionals hold), like Gabbay (1972), some being considered strong, like Stalnaker (1968). There is a chance that the reader’s favorite theory might be among them. So let us pretend that we accept the standard theory sketched in section 1, and with it everything said about the formal properties and differences of conditionals and the two kinds of argument. These formal properties will be our guide in translation from ordinary language to symbols, as I suggested in the previous paragraph.

However, we need more things to guide us. We need some characteristics of the ordinary language conditionals and arguments that would link them to their symbolic counterparts. These characteristics are the main topic of this section. I believe that an adequate theory of conditionals (based on the outlined theory we pretend to accept) would imply that antecedents, hypotheses, and assumptions have the following characteristics that I list under the label:

Thesis.

2.1. The antecedent of a true indicative (counterfactual) conditional is (would be), in the given context, a sufficient condition for the truth of the consequent.

2.2. The conjunction of hypotheses of a valid argument is, in any possible context, a sufficient condition for the conclusion.

2.3. Assumptions of a valid argument are premises such that their special status is, in any possible context, a sufficient condition for the same status of the conclusion.

Let me explain these in turn.

My suggestion is to regard antecedents as a kind of sufficient reason for the consequent. The idea is old, but has been abandoned or forgotten. I will offer some inconclusive arguments for the claim.

First, this claim works well when applied to particular cases in the later sections of this paper.

Second, what else are antecedents if not some sufficient reasons? This is not easy to answer. As we said, the syntax of natural language cannot give the full answer as it does not distinguish premises from antecedents. We may find some help from our formal semantics and say that ordinary language antecedents are whatever is best described by the artificial language antecedents. This, however, presupposes that we already have a solution to the translation problem. In order to have a ready answer to the translation problem, a fully (or at least reasonably) developed theory with semantics and pragmatics is needed. However, many of us are still waiting for such a theory, and some are also waiting for the “right” formal semantics, even if they expect to find it within our presupposed outline from section 1.5 So, since it seems that we currently lack the “right” theory, I suggest a shortcut—namely, to empirically test the Thesis (which I suppose would follow from the “right” theory), and see if it can be helpful to the problem of translation.

Third, the idea is compatible with our outlined theory. As we said, the outline is compatible with many different semantics, and 2.1 is stated in terms vague enough, I think, to be compatible with most of these. The outline assumes a selection function. What does it do? The role of that function is to somehow separate (what a theory takes to be) relevant from irrelevant antecedent-worlds (for each antecedent and each world of evaluation). Part or all of the meaning of “relevant” should be that all propositions that express the sufficient reason (in the given context) hold at each of the relevant antecedent-worlds. Let us use Goodman’s old example with the match m (Goodman 1947; cited from Goodman 1983). Let \(A\) = “the match m is struck,” \(C\) = “the match m lights,” and let both \(A\) and \(C\) be false. Let \(B_1\) = “m is dry,” \(B_2\) = “m is well-made,” \(B_3\) = “enough oxygen is present,” and \(B_4\) = “All dry, well-made matches light when struck in the presence of enough oxygen.” Let \(B_{1-4}\) be true; they describe the “given context” (or some part of it, depending on the chosen theory of conditionals). The conditional “Had m been struck, it would have lit” (\(A \rightarrow C\)) is true in the described situation. The proposition \(A\) is, in the given context (which is here described by \(B_{1-4}\)), sufficient for the truth of \(C\). Our favorite theory, since it is a sensible theory, selects the relevant \(A\)-worlds in such a way that all of \(B_{1-4}\) hold at each of them (we are obviously not interested in \(A\)-worlds where the match is not properly made, where different natural laws hold, or where matches are being lit by being put in tomato juice). \(C\) would hold in each of these worlds, and our theory gives the right truth value of the conditional.

Of course, “sufficient in the given context” works differently for counterfactuals and for indicative conditionals. The latter are epistemic, and the selected \(A\)-worlds can be different, either because we use different selection functions or because one function depends on different contextual parameters for the two kinds of conditionals. Suppose we know \(B_{1-4}\), we do not see the match, and have no beliefs about \(A\) and \(C\). Then we would accept “If m was struck, then it lit,” for the same reasons we have accepted the analogous counterfactual above. However, if we hold the match and see that it never lit, that is, we know \(\neg C\), and further have no beliefs about \(A\) and \(B_2\) but know \(B_1\), \(B_3\) and \(B_4\), we would reject that indicative conditional (being convinced that no sufficient reason for the lighting could have possibly obtained) and would rather accept a contrary conditional \(A \rightarrow \neg B_2\), i.e. “If m was struck, then it was not well made.” In this case \(\neg C\), \(B_1\), \(B_3\) and \(B_4\) would hold in every selected \(A\)-world. Also, \(\neg C\), \(B_1\), \(B_3\) and \(B_4\) would now determine “the given context,” and \(A\) is in that context sufficient for \(\neg B_2\).6

A fourth reason in favor of 2.1 might be this. Sufficient reasons are good for explanations. If asked why conditionals have the truth value they have, the answer may convincingly be cashed out in terms of sufficient conditions. For example, why is the counterfactual considered above, “Had m been struck, it would have lit,” true? We could offer \(B_{1-4}\) as explanation (noting that here the antecedent, together with \(B_{1-4}\), is sufficient for the consequent). If asked why the indicative “If m was struck, then it was not well made” is true, we could offer \(\neg C\), \(B_1\), \(B_3\) and \(B_4\) as explanation. It would be good for our formal semantics if the truth conditions were related to explanations of truth values. Saying that \(A\rightarrow C\) is true because \(C\) holds in the selected worlds is not an explanation, unless we know that the selection function can be interpreted as if it picks out the antecedent-worlds where the explanation holds. If we do not know that, or worse, cannot know that, then why use such a selection function? Worse still, if we do know that the explanation cannot hold in all of the selected antecedent-worlds, that would be a good reason to reject the semantics.7

However, I am aware that I cannot please everyone. For example, if you prefer a unified theory of conditionals that includes all or most if-constructions, you will not be pleased with my 2.1. In particular, “even if” conditionals certainly do not go well with 2.1. In addition, 2.1 is meant to work primarily for contingent antecedents and consequents. To make things simpler, I will stipulate that a conditional is vacuously true if the antecedent is impossible or the consequent necessary (which accords with standard conditional logic anyway). There have always been philosophers who do not like that, and their number seems to be growing. Still, in spite of different views we might have, hopefully you will find something of interest in my paper. Different approaches to conditionals, or theories of conditionals, may nevertheless agree about a large and important class of conditionals. There is a chance that the conditionals occurring in the paradoxes that I will discuss below belong to such a class and that we agree about them.

Let us now turn to the “special status,” which, according to the Thesis, makes the difference between assumptions and hypotheses. In Definition 1, we mentioned two special statuses of assumptions—validity and theoremhood. Both valid propositions and theorems are necessary, so we may count logical necessity as the third special status preserved by arguments from assumptions. In artificial language, arguments from hypotheses went from premises to conclusion; arguments from assumptions went from the special status of premises to the same status of the conclusion. My suggestion is that there are analogous situations in ordinary language. Sometimes we argue from premises or a premise to conclusion, say from \(P\) to \(C\): we suppose \(P\) and claim that \(C\) follows. Sometimes, however, we do not simply suppose \(P\); we suppose that \(P\) cannot be false. Consequently, our supposition is not \(P\) itself but a claim about a modal qualification of \(P\), that is, our supposition is that \(P\) has a certain modal status. When we suppose that \(P\) cannot be false, we rule out the possibility of \(\neg P\), that is, we treat \(P\) as if it were necessary. In that case, the result of our inference has to be stronger than \(C\)—it has to be that \(C\) inherits the same modal status. Because of that, such arguments should be translated into symbols as arguments from assumptions, i.e. as necessity-preserving arguments, not as truth-preserving arguments from hypotheses.
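
A hypothetical mini-example of this translation policy (the die is my illustration): suppose a speaker takes for granted that a die must show an even number, and says “if it shows neither two nor four, it shows six.” Since the supposition rules out the odd outcomes, the if-construction is best rendered as an argument from assumptions,

\[\begin{prooftree} \AxiomC{$\textnormal{Two} \lor \textnormal{Four} \lor \textnormal{Six}$} \doubleLine \UnaryInfC{$\neg(\textnormal{Two} \lor \textnormal{Four}) \rightarrow \textnormal{Six}$} \end{prooftree}\]

rather than as the corresponding truth-preserving argument from hypotheses.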

Pragmatics teaches us that in every conversation something is taken for granted8 and that some possibilities are ignored.9 I am here especially interested in cases where a contingent proposition is taken for granted, and its negation is ruled out of consideration. This can happen for various reasons. The most obvious case is when we explicitly agree to suppose something, say \(P\). As long as \(P\) holds as a supposition, in a smooth conversation we do not call it into question, nor do we consider \(\neg P\) as a possibility. For that part of our conversation \(P\) is treated as if it were necessary. But \(P\) does not need to be stated explicitly in order to be treated as if it were necessary—it could be a presupposition, or a part of the common ground.10 The negation of \(P\) might not belong among what Lewis called relevant possibilities in a conversation. Thus we can say that there are, in ordinary language, propositions whose negation is ignored and which are treated as if they were necessary. So we gain another candidate for the special status that may be preserved by the arguments from assumptions. It is epistemic necessity. The other three (validity, theoremhood, and logical necessity) are more likely to occur in an artificial language, while epistemic necessity is more suitable as a status of ordinary language assumptions.

What is the exact nature of that necessity? What are its formal properties? Can the answer to that question give a full or only partial answer to the next question (which is my main concern here): what are the formal properties of arguments that preserve that kind of necessity? I wish I could answer. These are million-dollar questions, and what I am able to offer here is far from a complete answer. Arguments that preserve different kinds of necessity may share some formal properties (for example, the rule that necessity entails truth is common to logical and physical necessity). Sometimes, they may share all their formal properties (maybe this is the case with logical and metaphysical necessity—the system \(S5\) is sometimes said to capture one, sometimes the other of these two senses of necessity).11 Epistemic necessity might not be a “real” necessity, in a logical or (meta)physical sense. However, in the context of reasoning it might well behave as a “real” necessity. Whether always or only sometimes, I do not know. But here is what I suggest. Let us assume that the formal properties from the two tables at the end of section 1 are common to all arguments from assumptions that preserve different kinds of special status.12 Next, when we realize that our ordinary language premise or if-clause is not simply \(P\), but the claim that \(P\) has special status, we should translate our argument or if-construction into symbols using arguments from assumptions, not conditionals nor arguments from hypotheses. In general, when translating our if-constructions into symbols, we need to figure out which of 2.1, 2.2, and 2.3 is intended by our if-clause, and translate accordingly. My last suggestion is that we put the previous suggestions to the test. The proof of the pudding is in the eating. So let us test the distinction between assumptions, hypotheses, and antecedents on some paradoxes.

3 Case 1: the Direct Argument

The so-called “horseshoe-analysis” (\(\supset\)‑analysis for short) says that natural-language indicative conditionals are material implications, or that the truth conditions for indicative conditionals are the same as the truth conditions for material implication. This theory has always had its supporters, maybe since the time of Philo, but certainly since the time of Grice,13 albeit (it seems) as a minority. The Direct Argument (DA), which allegedly supports the \(\supset\)‑analysis, goes like this:

(DA) \(A \lor B \textnormal{ entails } \neg A \rightarrow B\)

Stalnaker said this about DA:

This piece of reasoning—call it the direct argument—may seem tedious, but it is surely compelling. Yet, if it is a valid inference, then the indicative conditional conclusion must be logically equivalent to the truth-functional material conditional [… because] the argument in the opposite direction—from the indicative conditional to the material conditional—is uncontroversially valid. […] and this conclusion [i.e. the \(\supset\)‑analysis] has consequences that are notoriously paradoxical [… and] must be explained away by anyone who wants to defend the thesis that the direct argument is valid. Yet anyone who denies the validity of that argument must explain how an invalid argument can be as compelling as this one seems to be. […] There are thus two strategies that one may adopt to respond to this puzzle: defend the [\(\supset\)‑analysis] and explain away the paradoxes of the material implication, or reject the [\(\supset\)‑analysis] and explain away the force of the direct argument. (1975; cited from Stalnaker 1999, 63. The square brackets have been added to the original.)

Stalnaker adopted the second strategy. I will do the same here, in a different way.

What kind of argument is DA? It is obviously supposed to be an argument from hypotheses in Stalnaker’s paper, but let us consider both possibilities—DA as an argument from hypotheses (DAh), and DA as an argument from assumptions (DAa). Let us further note the fact that DAh is invalid in the standard conditional logic, and that DAa is valid. Following what Stalnaker said and implied in his paper,14 in solving paradoxes, pointing to a mistake is the smaller part of the job. The main part is to explain why it is a mistake and why it has not been noticed. The standard logic already did the smaller part by rejecting DAh. Let us turn to the main part.

If the disjunction is understood as an assumption, i.e. if it has to be that either \(A\) or \(B\) is the case, and the possibility of the disjunction being false is ruled out of consideration, then it has to be that if it is not one disjunct, it is the other. So DAa sounds good. It seems strange to say: “Under the assumption that \(A \lor B\), if \(A\) is false, maybe \(A \lor B\) is false as well … So it might not be the case that \(B\) is true if \(A\) is false.” The strangeness may be explained by noting that it is a case of making an assumption and canceling it in the same breath. It is usually not done, because it is not clear what would be the purpose of introducing an assumption and immediately giving it up. Of course, in the dynamics of a conversation presuppositions may be introduced for some part of the conversation and then canceled. But we are now discussing the validity of an argument, and we are not interested in the part of the conversation in which our premise has been canceled. Our premise says that we are limited to considering the situations where \(A \lor B\) is true, and other possibilities are being ignored. The premise can be canceled, but as long as it holds, we cannot reject the conclusion \(\neg A \rightarrow B\), because the antecedent cannot bring into consideration scenarios that are outside of the presupposed limit. In terms of the formal semantics, the assumption ruled out the possible worlds where the disjunction is false, so the selection function cannot select any such world. (If the antecedent does bring in possibilities from beyond the limit, this amounts to canceling the premise, and such cases are irrelevant for evaluating DAa; formally speaking, if the conclusion is evaluated after the premise has been canceled, then the premise and the conclusion are not evaluated in the same model.)

Things are different, however, if the disjunction is understood as a hypothesis. Nothing is presupposed about the modal status of a hypothesis, so there is no limit to possible scenarios (the selection function is not limited to the possible worlds where the hypothesis is true). In considering whether \(\neg A \rightarrow B\) follows from the hypothesis \(A \lor B\), we might say that our antecedent might point to situations where the disjunction is not true, so it may be false that \(B\) is the case if \(\neg A\) is. This does not mean that the antecedent cancels the premise (i.e. the premise and the conclusion can be evaluated in the same model). The hypothesis is about the actual situation (or about the situation in whichever the world of evaluation is) and the antecedent may (but need not) be about the actual situation. Therefore, the hypothesis \(A \lor B\), even if true, is not sufficient, in every possible context, for \(\neg A \rightarrow B\). This might be a justification for considering DAh invalid and DAa valid.
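
In terms of the formal semantics, a countermodel sketch for DAh (writing \(f\) for the selection function, as before): let \(A \lor B\) be true at \(\alpha\) only because \(A\) is, and let some selected \(\neg A\)-world falsify \(B\):

\[\alpha \vDash A \ (\textnormal{hence } \alpha \vDash A \lor B), \qquad \beta \in f(\neg A, \alpha),\ \beta \vDash \neg A \land \neg B, \qquad\textnormal{so}\qquad \alpha \nvDash \neg A \rightarrow B.\]

Nothing in the hypothesis reading prevents such a \(\beta\) from being selected, which is exactly the point made above.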

What does this mean for the relation between DA and \(\supset\)‑analysis? \(\supset\)‑analysis may be represented as a biconditional:

\(\vDash (A \supset B) \equiv (A \rightarrow B)\)

or, which is the same:

\(\vDash (\neg A \lor B) \equiv (A \rightarrow B)\)

or, for convenience, substituting \(\neg A\) for \(A\) throughout and simplifying the double negation:

(\({\supset}\)‑a) \(\vDash (A \lor B) \equiv (\neg A \rightarrow B)\)

We will take \(\supset\)‑a as expressing the \(\supset\)‑analysis.

\(\supset\)‑a is a biconditional consisting of two implications:

  28. \(\vDash (A \lor B) \supset (\neg A \rightarrow B)\)

  29. \(\vDash (\neg A \rightarrow B) \supset (A \lor B)\)

One half of \(\supset\)‑a, (29), is considered trivial (assuming that modus ponens is valid for the arrow). Applying the converse of the deduction theorem to (28) gives us DAh:

(DAh) \(\{A \lor B\} \vDash \neg A \rightarrow B\)

Therefore, DA is said to support the \(\supset\)‑analysis because DAh plus two trivialities (the deduction theorem for \(\supset\) and (29)) imply \(\supset\)‑a.
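
Spelled out, the route from DAh to \(\supset\)‑a is:

\[\{A \lor B\} \vDash \neg A \rightarrow B \ \Rightarrow\ \vDash (A \lor B) \supset (\neg A \rightarrow B) \ \Rightarrow\ \vDash (A \lor B) \equiv (\neg A \rightarrow B)\]

by the deduction theorem for \(\supset\) and then by (29) with \(PL\).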

On the other hand, DAa does not support the \(\supset\)‑analysis:

(DAa) \[\begin{prooftree} \AxiomC{$A \lor B$} \doubleLine \UnaryInfC{$\neg A \rightarrow B$} \end{prooftree}\]

(converse DAa) \[\begin{prooftree} \AxiomC{$\neg A \rightarrow B$} \doubleLine \UnaryInfC{$A \lor B$} \end{prooftree}\]

Both DAa and its converse are valid, but this two-way inference does not entail the equivalence \(\supset\)‑a (as shown in section 1.2.1).

Thus my suggestion is that the DA problem can be explained away by pointing to an equivocation. Arguments from assumptions and arguments from hypotheses can be easily confused in ordinary language. The reason why DA may appear compelling is because it is understood as DAa. In that case, however, DA does not support the \(\supset\)‑analysis. It supports the \(\supset\)‑analysis only if understood as DAh, which is less compelling (or not compelling at all). Therefore, DA is either not compelling (understood as DAh) or if it is compelling (understood as DAa), then it has nothing to do with \(\supset\)‑analysis. When translating DA into symbols we should pay attention to the exact intended meaning of our premise: do we suppose simply \(A \lor B\) or do we suppose that anything opposing \(A \lor B\) is ruled out of consideration (i.e. that \(A \lor B\) must hold)? We should render DA as DAh in the first case, and as DAa in the second.

What exactly did I achieve, or plan to achieve, here? I have provided reasons for thinking that DAh is not compelling, but I cannot say that I have proved that DAh is invalid. One can hardly expect a conclusive proof of a thing like that. In my view, such basic rules of inference are to be evaluated together with the comprehensive theories to which they belong. Opposing comprehensive theories, such as those based on the \(\supset\)‑analysis and those based on the standard theory outlined above, are to be tested empirically and evaluated according to their overall success. A “proof” of a rule of inference would then be its belonging to a more successful theory. Obviously, I did not say nearly enough to estimate which approach is more successful. So I am not here in the business of proving or disproving the \(\supset\)‑analysis. However, I believe that I have scored a point for the standard theories: having noted the fact that DAh is invalid and DAa is valid in standard logics, I argued that such theories have semantic and pragmatic means to justify that fact and to explain away the DA problem (with the aid of my distinctions and Thesis).

That completes what I have to say about the DA problem, as it is usually presented in the literature. I will add just a few words about counterfactuals. DA is said to be a problem for indicative conditionals and not for counterfactuals, because the counterfactual version of DAh is said not to be as compelling as the indicative version, or maybe not compelling at all.15 I do not know the exact reason for that claim, but here is my guess as to what might be behind it. Analogous to the indicative versions, DAh is invalid and DAa valid for counterfactuals in standard logics. If asked to explain whether this is good or bad for standard logics, I would say that it is good. My explanation would be exactly analogous to the explanation I gave above for the indicative versions. All the details would remain the same. Whence, then, comes the difference in intuitive acceptability of the two versions? A typical indicative has an antecedent that is not known to be true or false. A typical counterfactual points to a counterfactual situation by an antecedent known to be false. For that reason, it might be easier to cancel presuppositions, assumptions, and premises by using a counterfactual than by using an indicative conditional. My guess is that the counterfactual version of DAh appears to be less compelling because its premise looks more easily cancelable by the antecedent of the conclusion, which is why the premise does not seem to ensure the truth of the conclusion.

Whether or not my guess is right, such reasoning is not correct. When evaluating an argument, we are interested in what holds under the premise. There is no point in looking at what holds after the premise has been canceled. In explaining the indicative version, I noted that the premise has not been canceled in either case: neither in the explanation of the validity of DAa nor in the explanation of a possible counterexample to DAh. It can happen, of course, in some conversations that a premise gets canceled by the conclusion, but then we do not have a counterexample.

4 Case 2: A Standard Argument for Fatalism

Let us consider what Dummett (1964, 345) called a standard argument for fatalism. Stalnaker, who considered the same argument (1975; see the reprint 1999, 74f), presented it in the form of natural deduction (this means that the main argument and the sub-arguments are from hypotheses):

\[
\begin{array}{ll}
a. & \textnormal{Killed} \lor \neg\textnormal{Killed} \\
b. & \quad \textnormal{Killed} \\
c. & \quad \textnormal{Precautions} \rightarrow \textnormal{Killed} \\
d. & \quad \textnormal{Ineffective} \\
e. & \quad \neg\textnormal{Killed} \\
f. & \quad \neg\textnormal{Precautions} \rightarrow \neg\textnormal{Killed} \\
g. & \quad \textnormal{Unnecessary} \\
h. & \textnormal{Ineffective} \lor \textnormal{Unnecessary}
\end{array}
\]

(indentation marks the two subproofs, \(b\)–\(d\) and \(e\)–\(g\))

  30. \(a\) I will be killed in the air raid or I won’t.
    \(b\) Suppose I will be killed.
    \(c\) Then I will be killed even if I take precautions.
    \(d\) Therefore, precautions are ineffective.
    \(e\) Suppose I won’t be killed.
    \(f\) Then I won’t be killed even if I don’t take precautions.
    \(g\) Therefore, precautions are unnecessary.
    \(h\) Therefore, precautions are either ineffective or unnecessary.

On the one hand, we feel that the conclusion does not follow. On the other, the argument seems valid. The main argument has the valid form of a constructive dilemma, and the first premise is logically true, so if there is a mistake, it must be in the sub-arguments. Dummett (1964, 346ff) argued that no conditional which allows the steps (30 c) and (30 f) is strong enough to allow the steps (30 d) and (30 g). Thus, he points to an equivocation of two senses of conditionals. According to Stalnaker, even if we accept Dummett’s solution, there are more questions to be answered. He argues that the main task is not to point to a mistake committed in the fatalism argument, but to show why anybody would make such a mistake. Had Dummett shown that there were these two senses of conditionals in ordinary language, that would have been a full solution. Stalnaker, however, does not believe that this could be done. Instead, he proposed a solution in terms of his notion of reasonable inference: the argument is invalid because the sub-arguments are invalid (in Stalnaker’s semantics for conditionals), since (30 c) and (30 f) are invalid steps. The force of the argument comes from the fact that the sub-arguments are reasonable. The whole argument, however, is not reasonable, since the reasonableness of sub-arguments does not ensure the reasonableness of the inference from (30 a) to (30 h).

I leave the discussion of Stalnaker’s reasonable inference for section 6. Here I will offer another solution. Let us first state the relevant facts from the standard conditional logic. Constructive dilemma is valid as an argument from hypotheses and invalid as an argument from assumptions (as we saw in section 1.2.2). Next, this version of verum ex quodlibet is not valid in standard conditional logic:

\[\begin{prooftree} \AxiomC{$C$} \UnaryInfC{$A \rightarrow C$} \end{prooftree}\]

(This was to be expected anyway once we have noticed that the deduction theorem for conditionals does not hold: see section 1.1.) We will need a name for this rule, so let us call it hypothesis ex quodlibet. On the other hand, the following rule is valid (call it assumption ex quodlibet):

\[\begin{prooftree} \AxiomC{$C$} \doubleLine \UnaryInfC{$A \rightarrow C$} \end{prooftree}\]

(After the assumption rules out all \(\neg C\)-worlds, the selection function for the conditional has nothing else to select but \(C\)-worlds.) For these reasons, the sub-arguments (30 b – 30 d) and (30 e – 30 g) are invalid as arguments from hypotheses, and valid as arguments from assumptions.
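To make the two rules concrete, here is a minimal computational sketch in a Stalnaker-style model. Everything in it is illustrative: the three worlds, the valuations, and the fixed closeness ordering are toy choices of mine, not part of the standard semantics beyond its bare outline (a selection function picking the closest antecedent-world among the worlds still under consideration).

```python
# Toy Stalnaker-style model (illustrative names and valuations).
WORLDS = {"w1", "w2", "w3"}
C = {"w1", "w2"}   # worlds where the consequent C holds
A = {"w2", "w3"}   # worlds where the antecedent A holds

# From each world, worlds listed left to right count as "closer" (self first).
ORDER = {"w1": ["w1", "w3", "w2"],
         "w2": ["w2", "w1", "w3"],
         "w3": ["w3", "w2", "w1"]}

def select(antecedent, w, domain):
    """Closest antecedent-world to w among the worlds under consideration."""
    for v in ORDER[w]:
        if v in antecedent and v in domain:
            return v
    return None  # the antecedent rules out every relevant world

def conditional(antecedent, consequent, domain):
    """Worlds in the domain at which 'antecedent -> consequent' is true."""
    return {w for w in domain
            if (v := select(antecedent, w, domain)) is None or v in consequent}

# Hypothesis ex quodlibet, read as truth preservation over all worlds:
# at w1, C is true but the closest A-world (w3) is not a C-world.
print(C - conditional(A, C, WORLDS))   # {'w1'}: the rule fails

# Assumption ex quodlibet: the assumption C first rules the not-C worlds
# out of consideration; selection then has only C-worlds left to pick.
print(conditional(A, C, C) == C)       # True: the rule holds
```

The last line simply re-enacts the parenthetical remark above: once the \(\neg C\)-worlds are gone, the selection function either returns a \(C\)-world or nothing, so \(A \rightarrow C\) holds throughout the restricted domain.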

In my view, we have here once again a case of equivocation between assumptions and hypotheses. The steps (30 c) and (30 f) are valid only in the case of entailment from assumptions. If we assume that I will be killed, then we rule out of consideration any possibility that the opposite might happen; then it follows that I will be killed even if I take precautions. Likewise, under the assumption that I will not be killed, it must be that I will not be killed, whatever I do or do not do. However, as we saw in section 1.2.2, constructive dilemma is not valid for arguments from assumptions. That is, although the sub-arguments are valid, the whole argument is not. The whole argument has a valid form as an argument from hypotheses, but then the sub-arguments are invalid. The hypothesis (30 b) (Killed) cannot rule out my survival as impossible. Even if it is true, (30 b) is not a sufficient condition in every context for the conditional (30 c). In general, the consequent (as a hypothesis) is not sufficient in every context for the truth of the conditional. In other words, the Thesis accords with the facts about conditional logic pointed to above: the rule we might call premise ex quodlibet is valid for assumptions and invalid for hypotheses:

\[\begin{prooftree} \AxiomC{$C$} \doubleLine \UnaryInfC{$A \rightarrow C$} \end{prooftree}\]

\[\{C\} \nvDash A \rightarrow C\]

Therefore, my view is that the alleged strength of the argument (30 a – 30 h) for fatalism comes from an equivocation. The sub-arguments might appear valid if understood as arguments from assumptions, and the whole argument looks valid when understood as an argument from hypotheses.

What exactly did I achieve or plan to achieve here? I did not prove that the steps (30 c) and (30 f), i.e. the sub-arguments, are invalid as arguments from hypotheses. I just stated the fact that they are already invalid in standard conditional logic. I also stated the fact that they are valid from assumptions. Then I tried to explain why I think that the theory has pragmatic and semantic means to justify these facts, and hence that it can explain away the paradox. My aim was not to prove or disprove fatalism; my position is not metaphysical, but logical. I argued that the verdict of our presupposed logic that the argument for fatalism is a poor one is to be justified in pragmatic terms, using the distinctions from the Thesis.

One more thing to do here is to compare the indicative and the counterfactual version. Just imagine that the conditionals in the sub-arguments (30 c) and (30 f) are not indicative but counterfactual. Some philosophers might point to what they see as a disanalogy between the two versions and see only one version as paradoxical. The problem may be stated this way. There is a disanalogy between the indicative and the counterfactual version. The indicative version might appear paradoxical, so there is a problem to solve. The counterfactual version does not appear paradoxical, it just appears invalid, so there is nothing to solve. I, however, have claimed to have “solved” both versions, in exactly the same way.

Where does the disanalogy come from? Apparently, it stems from the claim that at least one of the rules, i.e. hypothesis ex quodlibet or assumption ex quodlibet, is more compelling for indicative than for counterfactual conditionals. Suppose I will be killed. Does it follow that:

(30 c) I will be killed even if I take precautions?

Or, suppose that I was killed. Does it follow that:

(30 c-cf) I would have been killed even if I had taken precautions?

While the former might appear okay, the latter is clearly invalid. Or so the objection goes.

In assessing these two arguments, we first need to specify the nature of the supposition “Killed.” After all, perhaps we will easily agree that both arguments are invalid if the supposition is a hypothesis. Also, hypothesis ex quodlibet would make our conditional logic collapse into classical logic, i.e. we would end up with a horseshoe-theory for both counterfactual and indicative conditionals. So, the supposition should be regarded as an assumption. That is, our premise is not only that I will be (was) killed, but also that my survival is ruled out of consideration. Hence we may reformulate the objection as saying that the above indicative instance of assumption ex quodlibet is more compelling than the counterfactual instance. But why is that so? Or, better, is it so at all?

I do not think it is so. Let us first note that both the indicative and the counterfactual versions of assumption ex quodlibet are valid in standard theories. Let us further note that our instance of that rule looks acceptable—both (30 c) and (30 c-cf) sound good, given that my survival is out of the question (i.e. given that “Killed” is not a hypothesis but an assumption). I do not see any relevant difference between the indicative and the counterfactual version. They pass or fail together. The fact (discussed at the end of section 3) that counterfactuals, unlike indicative conditionals, are convenient tools for canceling presuppositions is not relevant here. It is true that one may deny (30 c-cf) and claim:

Had I taken precautions, I might not have been killed after all!

This might be perfectly rational, but still it is irrelevant to our purpose. This claim cancels our premise (“Killed”). When assessing an argument, we want to know what follows from a premise while it still holds, not after it has been canceled. Thus I think that if one denies that (30 c-cf) follows from the assumption “Killed,” then one either understands the premise as a hypothesis or does not realize that the premise has been canceled, which in turn may happen only if one forgets that the premise is an assumption and not a hypothesis. So I believe that my solution to the indicative case, if it is any good, solves mutatis mutandis the counterfactual case.

5 Case 3: McGee’s Counterexample to Modus Ponens

McGee (1985) proposed a counterexample to modus ponens:

Opinion polls taken just before the 1980 election showed the Republican Ronald Reagan decisively ahead of the Democrat Jimmy Carter, with the other Republican in the race, John Anderson, a distant third. Those apprised of the poll results believed, with good reason:

M1. If a Republican wins the election, then if it’s not Reagan who wins, it will be Anderson.

M2. A Republican will win the election.

Yet they did not have reason to believe:

MC. If it’s not Reagan who wins, it will be Anderson.

(I have added the labels “M1,” “M2,” “MC.”) Given the background story, we believe M1 and M2, and we do not believe MC because we believe the conditional with the contrary consequent: If it is not Reagan who wins, it will be Carter. What I see as the main problem, and the point where the strength of the counterexample lies, is the fact that M1 appears to be not only true but trivially so, even though it has a true antecedent and a false consequent.
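Why a true antecedent and a false consequent make trouble is worth spelling out once. A sketch only, writing \(f\) for the selection function of the Stalnaker–Lewis semantics presupposed here: by the centering condition,

\[
w \vDash A \;\text{ implies }\; f(A, w) = w, \qquad \text{so, whenever } w \vDash A:\quad w \vDash A \rightarrow C \iff w \vDash C.
\]

A conditional with a true antecedent thus simply takes the truth value of its consequent at the actual world.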

In section 3 we talked about the smaller and bigger tasks involved in solving a paradox (finding the mistake and explaining why it is a mistake and why anybody should make it). Standard conditional logic offers the smaller part of a possible solution: this is not a counterexample to modus ponens because the long premise is not true. It has a true antecedent and a false consequent, so it cannot meet the truth conditions. Now for the main task—why does M1 appear to be trivially true?

Let us use the Thesis to consider three things—sentence M1 translated into symbols as a conditional and two kinds of arguments:

  (31) \(\textrm{Republican}\rightarrow(\neg\textrm{Reagan}\rightarrow\textrm{Anderson})\)

  (32) \(\{\textrm{Republican}\}\vDash\neg\textrm{Reagan}\rightarrow\textrm{Anderson}\)

  (33) \[\begin{prooftree} \AxiomC{$\textrm{Republican}$} \doubleLine \UnaryInfC{$\neg\textrm{Reagan}\rightarrow\textrm{Anderson}$} \end{prooftree}\]

The Thesis requires the antecedent of a true conditional to be sufficient, in the given context, for the consequent. In (31) this is de facto not the case, since the antecedent is true and the consequent is not. This is a sense in which (31) is false, which in this case may be offered as a justification for the standard truth conditions for conditionals. Since the antecedent is not sufficient for the consequent in the given context, it cannot be sufficient in every context, so (32) is invalid. On the other hand, the proposition Republican, as an assumption, has the strength to rule out of consideration the Democrats and Carter. Once they have been ruled out, the conclusion of (33) is perfectly acceptable (given that a Republican has to win, then, of course, it has to be that if it is not one of the two, it is the other). We cannot maintain that Carter will win if Reagan does not, because our assumption made us forget about Carter. Therefore our reason to reject MC no longer exists. Thus (33) is valid. Again, the proposition Republican, as an antecedent, does not have the strength to rule out what opposes it; so, Carter is still in the game and, because of that, the antecedent is not sufficient in (31). My suggestion is that the way to explain away McGee’s paradox is to point to a confusion between antecedents and assumptions. M1, interpreted as (31), is false, and that is why we do not have a counterexample to modus ponens. The reason M1 appears to be trivially true is that we understand it as (33).
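The toy encoding from section 4 can be run on this case too. Again, everything here is an illustrative sketch of mine (one world per winner, a closeness ordering reflecting the polls), not McGee’s or Stalnaker’s own formalism:

```python
# One world per possible winner; from any world, the Reagan world counts
# as closer than the Carter world, which counts as closer than the
# Anderson world (and each world is closest to itself).
R, C, A = "reagan", "carter", "anderson"
REPUBLICAN, NOT_REAGAN, ANDERSON = {R, A}, {C, A}, {A}

def order(w):
    return [w] + [v for v in (R, C, A) if v != w]

def select(ant, w, domain):
    return next((v for v in order(w) if v in ant and v in domain), None)

def cond(ant, cons, domain):
    return {w for w in domain
            if (v := select(ant, w, domain)) is None or v in cons}

# (31): at the actual world (Reagan wins), the embedded conditional
# selects the Carter world, so "not-Reagan -> Anderson" is false there;
# since (31)'s antecedent is true at that world, (31) is false as well.
print(R in cond(NOT_REAGAN, ANDERSON, {R, C, A}))            # False

# (33): the assumption Republican removes the Carter world first.
print(cond(NOT_REAGAN, ANDERSON, REPUBLICAN) == REPUBLICAN)  # True
```

With Carter still in the picture, the antecedent of (31) is not sufficient for its consequent; with Carter ruled out by the assumption, (33) goes through.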

This completes the solution I propose. I would like to add a few more thoughts: a) to avoid possible misunderstanding, b) to emphasize the need for introducing the notion of arguments from assumptions, and c) to say a few words about how disputes about basic rules of inference could be resolved (this will also help me explain my ambitions in this paper more clearly).

a) One might object to the claim that there are arguments from assumptions in ordinary language. Why would anybody suppose that a contingent proposition (such as Republican) is necessary? That sounds unreasonable. Even if we grant that a kind of necessity is involved, am I not confusing logical and epistemic necessity? I plead not guilty. When making an assumption (e.g. Republican) we are not making a logical or metaphysical supposition about the modal status of the claim. We do not suppose that, God forbid, the Republicans necessarily win. We temporarily choose some (logical or metaphysical) possibilities as relevant, and rule out others as irrelevant to our conversation. Relevant possibilities are those compatible with our assumption, which amounts to treating the assumption as if it were necessary. This is a phenomenon routinely explained in pragmatics (rather than an unreasonable claim that something contingent is necessary). Also, I am not confusing different kinds of necessity. True, I never explained the exact nature of the necessity involved. But given that different kinds of necessity may share some formal properties, in this paper I test the supposition that the formal properties from the table in section 1 hold for arguments from assumptions (as explained in the last paragraph of section 2).

b) In “Scorekeeping in a Language Game” Lewis (1979b) introduced his notion of accommodation into pragmatics. If participants in a conversation are cooperative (in the Gricean sense), they try to give a chance of truth to what they hear, interpreting it charitably using various accommodations of presuppositions, resolving vagueness, moving the border between relevant and irrelevant possibilities, etc. McGee’s long premise M1, as mentioned, appears to be not only true, but logically true. As such, it should be among the first candidates for accommodation and charitable reading. It cannot be simply dismissed as false. A good solution to a paradox (and, more generally, a logic of natural language) must find a right balance between being prescriptive and being descriptive. It seems to me that standard conditional logic (without the Thesis and my distinctions) might be in trouble here. If interpreted as a conditional, M1 is false, and I do not see how standard logic might render it true without giving up some of its essential features. One way of interpreting M1 as true, without modifying the standard logic, might be to claim that the main and the embedded conditional use different selection functions.16 This means that there is a context switch in the middle of M1, and that this switch is responsible for the mistake. Still, if this is to be a good solution, it should offer a systematic explanation of how and why such switches of the selection function happen. This explanation should provide some kind of justification for the context switch—even if it is a mistake, it is still rational people who make it. The explanation should also account for the spontaneity of the switch in M1—since M1 appears to be logically true, there must presumably be some rule-governed pragmatic reason for the switch.

Maybe all this can be done, maybe even in a way compatible with my solution. However, instead of proceeding along these lines, I prefer to use my distinctions because they are more generally applicable—they are not limited to cases with embedded conditionals, nor to cases with at least two conditionals occurring, nor do they necessarily involve a context switch. Moreover, I do not believe that every if-construction in ordinary language must at any cost be considered a conditional (it might well be an argument). Therefore, I prefer to explain that there are two possible interpretations, and to make the two senses of M1 clear, one in which M1 is to be rejected (as a conditional), and the other in which it is acceptable (as an argument from assumption). Then I propose that confusing the two senses is the mistake that creates the problem. Next I explain why the mistake was easy to make, which is also why the mistake is excusable. Still, an excusable mistake is a mistake, and should be corrected.17

c) Even though I try to introduce a new rule for translation of ordinary language into symbols, the position I defend in this paper is rather conservative and traditional. I talk in terms of sufficient reasons and I believe that there are “sacred” basic rules of inference, such as modus ponens and modus tollens, that are constitutive of the meaning of conditionals and cannot be questioned. In that regard, I have a long tradition on my side. That incurs the risk that I might overestimate the strength of my arguments. I try to keep that in mind when considering different theories, especially those which are radically different. McGee was the first to propose a semantics where modus ponens is invalid, but further attacks have followed. There are new theories dealing with the interaction between conditionals and modals. Some of these build new semantics for indicative conditionals to accommodate certain conditional claims that are considered false by the standard theories. The victim of this approach may be modus ponens (Kolodny and MacFarlane 2010) or modus tollens (Yalcin 2012). How can we resolve the dispute between these new radical theories and the traditional approach?

Some reactions (especially the early ones) to McGee’s counterexample tried to find a mistake in his argumentation, attempting to show that he overlooked something or violated some principles that he presumably also accepts or should accept. However, it seems that neither he nor the others just mentioned ever made such a mistake. I do not believe that this dispute can be solved by finding a “mistake” that one side is making. A more useful approach would be first to admit that McGee, Kolodny, MacFarlane, and Yalcin know very well what they are doing when they oppose standard opinions. They are not working on small details. They are each offering a new general approach to conditionals. These approaches are to be compared in the same way as competing scientific theories are compared. They will eventually be accepted or rejected based on their overall success. That is certainly not a matter of finding a “mistake” in some trivial sense.

I believe that I have scored a point for the traditional side. This is because I believe that the distinctions I have defended are applicable to a large field, to many problems that have often been considered separately, problems for which many different unrelated solutions have been proposed. Also, my distinctions are applicable to counterfactuals as well, and some of the paradoxes, formulated originally in terms of indicative conditionals, have their analogous counterfactual versions. The new radical theories have yet to deal with them.18 (More about counterfactuals in the next section.)

6 Relation to Stalnaker’s Reasonable Inference

The first two cases above (direct argument and fatalism) were discussed in Stalnaker’s paper “Indicative Conditionals” (1975). My solution has a certain similarity to Stalnaker’s solution in terms of his notion of “reasonable inference.” In this section I will try to explain where the similarities and differences come from. Comparison to Stalnaker’s theory will, I believe, make my position clearer:

An inference from a sequence of assertions or suppositions (the premises) to an assertion or hypothetical assertion (the conclusion) is reasonable just in case, in every context in which the premises could appropriately be asserted or supposed, it is impossible for anyone to accept the premises without committing himself to the conclusion. (1999, 65)

There are several common words in this definition that are actually Stalnaker’s technical notions. We need to explain “context,” “appropriateness,” and “acceptance.”

By “context” Stalnaker means those features of context that determine what propositions are expressed by our sentences. The most important feature, he says, is common knowledge, or presumed common knowledge: the common ground, the background information that one takes for granted only if one presupposes that the other participants in the conversation take it for granted as well (cf. Stalnaker 1999, 67; 2002, 701). The formal device that represents the common ground is the context set, a set of worlds not ruled out by the common ground. A proposition is said to be compatible with or entailed by a context, respectively, when it is true at some or all the worlds from the context set. Contexts can change during our conversation, even by the conversation itself. Any accepted assertion changes the context by becoming an additional presupposition of subsequent conversation. That is, accepted assertions express propositions that rule out of the old context set the worlds where they do not hold, and then these propositions hold throughout the new context set. The appropriateness condition states that one cannot appropriately assert a proposition in a context incompatible with it. Applied to conditionals, the condition leads to the rule that one can appropriately assert a conditional only if its antecedent is compatible with the context. A typical counterfactual has an antecedent presumed to be false, so the rule is meant for indicative conditionals only.
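The machinery just described is compact enough to state in a few lines. The following is only my encoding of the description above, with illustrative names; context sets and propositions are both sets of worlds:

```python
def accept(context_set, proposition):
    """An accepted assertion rules out the worlds where it fails."""
    return context_set & proposition

def compatible(context_set, proposition):
    """The proposition is true at some world of the context set."""
    return bool(context_set & proposition)

def entailed(context_set, proposition):
    """The proposition is true at all worlds of the context set."""
    return context_set <= proposition

def appropriate_conditional(context_set, antecedent):
    """One can appropriately assert a conditional only if its antecedent
    is compatible with the context (typical counterfactuals fail this)."""
    return compatible(context_set, antecedent)
```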

Stalnaker defines entailment in the usual way: “A set of propositions (premises) entails a proposition (the conclusion) just in case it is impossible for the premises to be true without the conclusion being true as well” (1999, 65). Using my terminology, this is the relation between the set of hypotheses and the conclusion. Reasonable inference, on the other hand, corresponds to my arguments from assumptions. The reason for this is that the premises, once asserted and accepted, change the context and hold throughout the resulting context, i.e. they are entailed by the new context. Thus negations of accepted premises become inappropriate; we may say that they are ruled out of consideration. Accordingly, the premises have the status of necessity (relative to the context set), the same status that all other presuppositions from the common ground have. The conclusion of a reasonable argument is then entailed by the context, and it inherits the special status from the accepted premises. Thus, reasonable arguments are about preservation of that special status, not about preservation of truth. Because of that the formal properties of reasonable inference match those of arguments from assumptions, and do not match those of arguments from hypotheses. From Stalnaker’s paper we learn that transitivity and contraposition are reasonable (1999, 73) and constructive dilemma is not (1999, 74f). We also learn that the direct argument is reasonable, and it is easy to see that the converse (from conditional to disjunction) is also reasonable (1999, 72f). Therefore, reasonable inference both ways does not amount to equivalence (Stalnaker rejects the \(\supset\)‑analysis).
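A quick check, in the same style, of why the direct argument comes out reasonable on this picture (toy worlds, illustrative valuations): once the disjunction is accepted and holds throughout the new context set, every surviving \(A\)-world is a \(C\)-world, so a selection confined to the context cannot falsify the conditional.

```python
context = {"w1", "w2", "w3", "w4"}
A = {"w2", "w3"}
C = {"w3", "w4"}

premise = (context - A) | C    # worlds verifying "not-A or C"
context = context & premise    # accept the premise: {'w1', 'w3', 'w4'}
assert context & A <= C        # every surviving A-world is a C-world
```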

|  | necessitation | inference both ways gives equivalence | transitivity | contraposition | constructive dilemma |
|---|---|---|---|---|---|
| arguments from hypotheses | \(\times\) | \(\checkmark\) | \(\times\) | \(\times\) | \(\checkmark\) |
| arguments from assumptions | \(\checkmark\) | \(\times\) | \(\checkmark\) | \(\checkmark\) | \(\times\) |
| reasonable inference | ? | \(\times\) | \(\checkmark\) | \(\checkmark\) | \(\times\) |

This is the same table from the end of section 1, with one additional row for reasonable inference. The only difference between the last two rows is in the case of necessitation. I put the question mark because both answers are possible, depending on the meaning of the box, i.e. the modal operator. If the box stands for logical necessity, then necessitation is not reasonable. If the box stands for the epistemic necessity of the same kind that a premise gains by being accepted and becoming part of the common ground, then necessitation is reasonable.

This relation between entailment and arguments from hypotheses on the one side, and reasonable inference and arguments from assumptions on the other, makes Stalnaker’s and my solutions to cases 1 and 2 similar. On Stalnaker’s explanation, the direct argument is invalid but draws its strength from being reasonable; I called it invalid as an argument from hypotheses and explained its alleged strength by pointing to the validity of the corresponding argument from assumptions. The fatalism argument has a valid form but invalid sub-arguments, and an unreasonable form but reasonable sub-arguments, again analogous to the solution I defended in section 4. Why, then, do I look for new distinctions?

I believe that my distinctions point to a more basic phenomenon and are applicable to more kinds of cases. Solutions in terms of my distinctions match Stalnaker’s solutions in terms of reasonable inference, but my distinctions apply more broadly, because they are not limited by the appropriateness condition. First, a typical counterfactual has an antecedent presumed to be false, which makes the conditional inappropriate, so the notion of reasonable inference is not meant for this class of conditionals. Second, the notion of reasonable inference cannot be applied to arguments involving indicative conditionals that do not meet the appropriateness condition. For that reason, Stalnaker’s notion cannot be used to resolve McGee’s case. Reagan’s winning may well be a part of the common ground and hold throughout the context set. Reagan’s not winning occurs twice in McGee’s counterexample, so neither the first premise nor the conclusion meets the appropriateness condition.19
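In the encoding above, the point is immediate (world names illustrative):

```python
context_set = {"reagan-wins"}                 # Reagan's win is common ground
not_reagan = {"carter-wins", "anderson-wins"}
print(bool(context_set & not_reagan))         # False: "if it's not Reagan..."
                                              # is inappropriate in this context
```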

Consider the McGee case again. Sometime after the elections we could imagine such a conversation:

A: Had a Republican won, then, had it not been Reagan, it would have been Anderson.

B: Yes, but a Republican did win (you missed the news).

A: So, had Reagan not won, Anderson would have.

Consider also the fatalism case (30 a)–(30 h) again. It pertains to some period and some person. Suppose that a few years later we are presented with this argument, which also pertains to that same person and same period:

\[
\begin{array}{lll}
1. & \textnormal{Killed} \lor \neg\textnormal{Killed} & \textnormal{premise}\\
2. & \quad \textnormal{Killed} & \textnormal{hypothesis}\\
3. & \quad \textnormal{Precautions} \rightarrow \textnormal{Killed} & \\
4. & \quad \textnormal{Ineffective} & \\
5. & \quad \neg\textnormal{Killed} & \textnormal{hypothesis}\\
6. & \quad \neg\textnormal{Precautions} \rightarrow \neg\textnormal{Killed} & \\
7. & \quad \textnormal{Unnecessary} & \\
8. & \textnormal{Ineffective} \lor \textnormal{Unnecessary} &
\end{array}
\]

  1. He was killed in the air raid or he was not.
  2. Suppose he was killed.
  3. Then it would have been so even if he had taken precautions.
  4. Therefore, precautions are ineffective.
  5. Suppose he was not killed.
  6. Then it would have been so even if he had not taken precautions.
  7. Therefore, precautions are unnecessary.
  8. Therefore, precautions are either ineffective or unnecessary.

It is difficult to argue that these examples talk about something different from the original examples, and that these counterfactuals say something different from what was said by the analogous indicative conditionals.20 Thus, these examples present the same puzzles as the original versions already discussed in previous sections. My solutions to them would be exactly analogous to the solutions I proposed for the indicative versions. For these reasons, I believe that the distinction between antecedents, hypotheses and assumptions is more broadly applicable than Stalnaker’s notion of reasonable inference.

This is not a critique of Stalnaker’s theory, but a comparison that helps me emphasize and clarify my points. There is no conflict between our solutions—they agree within the appropriateness limit (as in DA and the original fatalism case), and the reason for that match has been explained in this section. In addition, my distinctions apply to some cases involving inappropriate indicative conditionals (which may occur in McGee-style counterexamples) and to some cases involving counterfactuals (like the two past-tense versions of McGee’s counterexample and the fatalism argument).

There is another more subtle difference between Stalnaker’s solution and mine, and that is a difference in emphasis, stress, or, let us say, accent. It comes from the choice of terminology. There is a positive component of the meaning of the word “reasonable.” It suggests something laudatory or commendable. Within the expression “invalid but reasonable” it suggests something justifiable or forgivable. Within my terminology, what is justifiable or forgivable is never the use of an invalid argument. Invalidity is a mistake, and is therefore bad. Justification is to be looked for elsewhere. In Stalnaker’s case, an argument, for example DAh, can be invalid and reasonable. In my case, it is not the same argument that is good in one sense and bad in another, but two different arguments: one good and the other bad (for example, DAa and DAh). So, I do not need to say that there is something justifiable in using invalid arguments, i.e. in the mistake itself. We both look for an excusing factor that would explain why the mistake was easy to make (in Stalnaker’s case, because the invalid argument may be reasonable; in my case, because assumptions, hypotheses, and antecedents may be hard to distinguish in ordinary language). Therefore, whereas for Stalnaker it is the argument stated in the formal language that can be bad and excusable (e.g. DAh), in my case what may be bad and excusable is never an argument expressed in symbols, but the translation of ordinary language if-constructions into symbols.


  1. Here is a citation from a randomly chosen text that mentions Hilbert axiomatization: “The sole rule of a standard Hilbert axiomatics is modus ponens, from \(\vdash A\) and \(\vdash A \supset B\) to \(\vdash B\)” (Urbas 1996, 443).↩︎

  2. Such semantics is usually called “Stalnaker-Lewis” or “standard,” since it shares the main elements of the theories presented in Stalnaker (1968) and Lewis (1973, 1979a).↩︎

  3. When brackets are omitted, a formula is an implication or equivalence rather than a conjunction or disjunction. So “\(A \land B \rightarrow C\)” means “\((A \land B) \rightarrow C\).”

    Exportation is considered invalid because adding it to standard conditional logic causes a collapse into classical logic, i.e. that would make the arrow the same as the horseshoe. A proof can be seen in McGee (1985, 465–66). See also his footnote 7 where he relates this proof to the failure of the deduction theorem. Gibbard (1981, 234 and further) proved similar results in a different way. Unlike McGee, Gibbard did not go on to deny the validity of modus ponens.↩︎

  4. Here is some evidence, from randomly chosen academic literature, that “if … then …” is used for both conditionals and arguments. It is enough to show examples of arguments stated in terms of “if … then …”. “Modus ponens says that if P is true, and if P implies Q, then Q must be true” (Dretske 2013, 28). “Existential generalization says that if we have found a particular object satisfying some property, then we can assert that there exists an object satisfying that property” (Wolf 2005, 20). “[…] [M]odus ponens says that if you know that p is true, and you also know that whenever p is true q is true, then you can give birth to the new baby truth, q” (Fishman 2002, 8). “But modus tollens is a rule of logic, too. And modus tollens says that if a logically correct argument leads to a false conclusion, then by God (or by Goddess!) something is wrong with the premises” (Koertge 2010, 7). I am not interested in whether all the details in these citations are correct, but only in the fact that they express arguments in terms of an “if …” form. Inferring from these that, for example, Dretske believed that modus ponens was a conditional would not be a charitable reading.↩︎

  5. Remember, the outline we agreed to presuppose is only a skeleton, not a particular conditional logic. Cf. Djordjevic (2012) about the important differences between various semantics that fit the outline.↩︎

  6. Similar examples, and the term “epistemic conditionals,” were first discussed by Warmbrōd (1981, 1983) and Gibbard (1981).↩︎

  7. These are not far-fetched possibilities. For such reasons Djordjevic (2013) rejects a class of some of the most popular semantics, including Lewis’s.↩︎

  8. Cf. for example Stalnaker (2002, 701), Lewis (1979b; 233 in the 1983 reprint).↩︎

  9. For example Lewis (1979b; 246–247 in the 1983 reprint).↩︎

  10. In Stalnaker’s sense, cf. (1975, 2002).↩︎

  11. For more details and subtle distinctions about \(S5\) necessities see for example Hale (2012).↩︎

  12. All except necessitation, which might be a bit more complicated. I will comment on it in section 6.↩︎

  13. Cf. Part I of Grice (1989), especially chapter 4 “Indicative Conditionals.”↩︎

  14. In the above citation, and also in (Stalnaker 1999, 74): “[It] is not enough to say that step x is invalid and leave it at that, even if that claim is correct. One must explain why anyone should have thought that it was valid.”↩︎

  15. Counterfactual DAa is presumably more compelling than counterfactual DAh. But counterfactual DAa is rarely considered.↩︎

  16. Based on a conversation with Stalnaker on a similar example, I believe that his solution of the McGee problem would go along these lines.↩︎

  17. The last two paragraphs under b) were supposed to provide an extra reason for the importance of using the notion of argument from assumptions. Another reason might be found in the literature. Leitgeb (2011) offers a solution to a problem in belief revision (discovered by Chalmers and Hájek 2007) in terms of a distinction that, it seems to me, pretty much resembles mine between hypotheses and assumptions.↩︎

  18. Furthermore, my solutions and distinctions are compatible with the traditional approach, and are not compatible with these new theories. This is because it is essential for my approach to keep a clear difference between antecedents and assumptions, and keep the former much weaker than the latter. Antecedents may do lots of things, change context, trigger or cancel presuppositions, introduce new possibilities etc., but they cannot rule out the possibility of what opposes them, as assumptions do. New semantics see antecedents much the same as I see assumptions. But I need another paper to discuss that properly.↩︎

  19. There is a possibility that common ground includes Reagan’s winning, and it is not a far-fetched one. This is important for my argumentation, and I will try to show it in more detail. We can modify McGee’s example by adding some more information. Let the opinion poll results be \(69\%\), \(30\%\), \(1\%\) for Reagan, Carter and Anderson, respectively. Imagine a conversation where participants believe that the margin of error is \(\pm 3\%\), which they understand as meaning that the actual voting results cannot differ from the opinion poll results by more than \(3\%\). Through several meetings and conversations on similar topics, this belief became part of the common ground for the group. On that understanding Reagan gets at least \(66\%\) of the vote, so Reagan’s winning is entailed by their common ground, and is thus part of it.

    Another example. I think we will easily agree that there once were or still are conversations where part of the common ground is that Reagan won the 1980 elections. Now consider a past tense version of McGee’s example:

    If a Republican won the election, then if it was not Reagan, it was Anderson.
    A Republican won.
    So, if it was not Reagan who won, it was Anderson.

    Here the appropriateness condition would not be met, but the example would pose the same problem as the original version. This version may not usually be properly assertable, but semantics must be able to evaluate it anyway. For example, this might not be what the participants in the conversation are saying to each other, but it could be that they are merely estimating something said or written by another person.↩︎

  20. Similar examples were made by Strawson (1986), from the (1997) reprint, p. 163:

    (1) Remark made in the summer of 1964: “If Goldwater is elected, then the liberals will be dismayed.”—(2) Remark made in the winter of 1964: “If Goldwater had been elected, then the liberals would have been dismayed.” It seems obvious that about the least attractive thing that one could say about the difference between these two remarks is that it shows that … the expression “if … then …” has a different meaning in one remark from the meaning which it has in the other.

    ↩︎

References

Chalmers, David J., and Alan Hájek. 2007. “Ramsey + Moore = God.” Analysis 67 (2): 170–72. doi:10.1093/analys/67.2.170.
Djordjevic, Vladan. 2012. “Goodman’s Only World.” In Between Logic and Reality. Modeling Inference, Action and Understanding, edited by Majda Trobok, Nenad Miŝĉević, and Berislav Žarnić, 269–80. Logic, Epistemology, and the Unity of Science 25. Dordrecht: Springer Science+Business Media B.V.
———. 2013. “Similarity and Cotenability.” Synthese 190 (4): 681–91. doi:10.1007/s11229-012-0198-4.
Dretske, Fred I. 2013. “The Case Against Closure.” In Contemporary Debates in Epistemology, edited by Matthias Steup, John Turri, and Ernest Sosa, 2nd ed., 27–40. Contemporary Debates in Philosophy. Malden, Massachusetts: Wiley Blackwell.
Dummett, Michael A. E. 1964. “Bringing about the Past.” The Philosophical Review 73 (3): 338–59. doi:10.2307/2183661.
Fishman, Mark B. 2002. “Teaching AI Epistemology to Humans.” In Proceedings of the Thirty-Fourth Annual Meeting of the Florida Section of the Mathematical Association of America, edited by David Kerr and Bill Rush. Florida Gulf Coast University: Florida Section of the MAA.
Gabbay, Dov M. 1972. “A General Theory of the Conditional in Terms of a Ternary Operator.” Theoria 38 (3): 97–104. doi:10.1111/j.1755-2567.1972.tb00927.x.
Gibbard, Allan F. 1981. “Two Recent Theories of Conditionals.” In Ifs: Conditionals, Belief, Decision, Chance, and Time, edited by William L. Harper, Robert C. Stalnaker, and Glenn Pearce, 211–47. The University of Western Ontario Series in Philosophy of Science 15. Dordrecht: D. Reidel Publishing Co.
Goodman, Nelson. 1947. “The Problem of Counterfactual Conditionals.” The Journal of Philosophy 44 (5): 113–28. doi:10.2307/2019988.
———. 1983. Fact, Fiction and Forecast. 4th ed. Cambridge, Massachusetts: Harvard University Press.
Grice, H. Paul. 1989. Studies in the Way of Words. Cambridge, Massachusetts: Harvard University Press.
Hale, Bob. 2012. “What Is Absolute Necessity?” Philosophia Scientiae. Travaux d’histoire Et de Philosophie Des Sciences 16 (2): 117–48. doi:10.4000/philosophiascientiae.743.
Hilbert, David R., and Wilhelm Ackermann. 1938. Grundzüge der theoretischen Logik. 2nd ed. Berlin: Springer Verlag.
———. 1950. Principles of Mathematical Logic. New York: Chelsea Publishing Company. Translation of Hilbert and Ackermann (1938) by Lewis M. Hammond, George G. Leckie and F. Steinhardt, with revisions, corrections and added notes by Robert E. Luce.
Koertge, Noretta. 2010. “The Feminist Critique [Repudiation] of Logic.” Unpublished manuscript, available at https://philpapers.org/archive/KOETFC.pdf.
Kolodny, Niko, and John MacFarlane. 2010. “Ifs and Oughts.” The Journal of Philosophy 107 (3): 115–43. doi:10.5840/jphil2010107310.
Leitgeb, Hannes. 2011. “God - Moore = Ramsey (A Reply to Chalmers and Hájek).” Topoi 30 (1): 47–51. doi:10.1007/s11245-010-9088-x.
Lewis, David. 1973. Counterfactuals. Cambridge, Massachusetts: Harvard University Press.
———. 1979a. “Counterfactual Dependence and Time’s Arrow.” Noûs 13 (4): 455–76. doi:10.2307/2215339.
———. 1979b. “Scorekeeping in a Language Game.” The Journal of Philosophical Logic 8 (3): 339–59.
———. 1983. Philosophical Papers. Vol. 1. Oxford: Oxford University Press.
McGee, Vann. 1985. “A Counterexample to Modus Ponens.” The Journal of Philosophy 82 (9): 462–71. doi:10.2307/2026276.
Stalnaker, Robert C. 1968. “A Theory of Conditionals.” In Studies in Logical Theory, edited by Nicholas Rescher, 98–112. American Philosophical Quarterly Monograph Series 2. Oxford: Basil Blackwell Publishers.
———. 1975. “Indicative Conditionals.” Philosophia: Philosophical Quarterly of Israel 5: 269–86. doi:10.1007/bf02379021. Reprinted in Stalnaker (1999, 63–77).
———. 1999. Context and Content: Essays on Intensionality, Speech and Thought. Oxford: Oxford University Press.
———. 2002. “Common Ground.” Linguistics and Philosophy 25 (4–5): 701–21. doi:10.1023/a:1020867916902.
Strawson, Peter Frederick. 1986. “‘If’ and \(\supset\).” In Philosophical Grounds of Rationality: Intentions, Categories, Ends, edited by Richard E. Grandy and Richard Warner, 228–42. Oxford: Oxford University Press. Reprinted in Strawson (1997, 162–78).
———, ed. 1997. Entity and Identity: And Other Essays. Oxford: Oxford University Press.
Urbas, Igor. 1996. “Dual-Intuitionistic Logic.” Notre Dame Journal of Formal Logic 37 (3): 440–51. doi:10.1305/ndjfl/1039886520.
Warmbrōd, Ken. 1981. “An Indexical Theory of Conditionals.” Dialogue. Revue Canadienne de Philosophie / Canadian Philosophical Review 20 (4): 644–64. doi:10.1017/s0012217300021399.
———. 1983. “Epistemic Conditionals.” Pacific Philosophical Quarterly 64 (1): 249–65. doi:10.1111/j.1468-0114.1983.tb00198.x.
Wolf, Robert S. 2005. A Tour Through Mathematical Logic. The Carus Mathematical Monographs 30. Washington, District of Columbia: Mathematical Association of America.
Yalcin, Seth. 2012. “A Counterexample to Modus Tollens.” The Journal of Philosophical Logic 41 (6): 1001–24. doi:10.1007/s10992-012-9228-4.