Consistency, Obligations, and Accuracy-Dominance Vindications

Marc-Kevin Daoust

Vindicating the claim that agents ought to be consistent has proved to be a difficult task. Recently, some have argued that we can use accuracy-dominance arguments to vindicate the normativity of such requirements. But what do these arguments prove, exactly? In this paper, I argue that we can make a distinction between two theses on the normativity of consistency: the view that one ought to be consistent and the view that one ought to avoid being inconsistent. I argue that accuracy-dominance arguments for consistency support the latter view, but not necessarily the former. I also argue that the distinction between these two theses matters in the debate on the normativity of epistemic rationality. Specifically, the distinction suggests that there are interesting alternatives to vindicating the strong claim that one ought to be consistent.

The normativity of the following formal coherence requirements is contentious:

Belief Consistency. If A believes that \(p\), it is false that A believes that \(\neg p\).1

Credal Consistency. If A has a credence of \(X\) in \(p\), then A has a credence of \(1-X\) in \(\neg p\).

Do we fall under an obligation to satisfy these requirements?2 Many philosophers like John Broome (2013, ch. 13) are convinced that the above requirements are normative, but cannot find a satisfactory argument in favour of such a conclusion. Other philosophers are less optimistic. For instance, Niko Kolodny (2005; 2007b; 2007a, 230–31) has argued that there is no reason to be consistent. According to him, what matters from an epistemic point of view is acquiring true beliefs (or acquiring beliefs that are likely to be true on the evidence) and avoiding false beliefs (or avoiding beliefs that are likely to be false on the evidence). However, a perfectly consistent system of beliefs (or credences) can be entirely false, inaccurate or improbable on the evidence. So, consistency requirements are not normative, in the sense that one does not necessarily have a reason to be consistent.

Recently, a new strategy has emerged to vindicate the normativity of Consistency. This strategy relies on accuracy-dominance principles, which roughly say that if state \(Y\) is better than state \(X\) at every possible world, one ought to avoid state \(X\). However, there is a weak and a strong interpretation of what is entailed by the accuracy-dominance arguments. According to the strong interpretation, accuracy-dominance arguments entail that one ought to be consistent. Joyce, for instance, argues that:

It is thus established that degrees of belief that violate the laws of probability are invariably less accurate than they could be. Given that an epistemically rational agent will always strive to hold partial beliefs that are as accurate as possible, this vindicates the fundamental dogma of probabilism [according to which degrees of belief must conform to the axioms of probability]. (1998, 600)

According to the weak interpretation, accuracy-dominance arguments merely entail that one ought not to be inconsistent. Easwaran, for instance, says that “we can use dominance to eliminate” the inconsistent doxastic options (2016, 826, emphasis added). In other words, dominance is here used to argue against inconsistency. Thus, we can make the following distinction between two views:

Normativity\(+\). Given the accuracy-dominance arguments, A ought to be consistent.

Normativity\(-\). Given the accuracy-dominance arguments, A ought not to be inconsistent.

This paper argues that, while accuracy-dominance arguments can vindicate Normativity\(-\), they do not necessarily vindicate Normativity\(+\). Specifically, accuracy-dominance arguments vindicate Normativity\(+\) when supplemented with a contentious hypothesis concerning the relationship between reasons for and reasons against. Hence, accuracy-dominance arguments do not vindicate Normativity\(+\) on their own.

In Section 1, I clarify the debate on the normativity of Consistency. In Sections 2 and 3, I present two important arguments in the debate surrounding the normativity of Consistency: accuracy-dominance arguments and Kolodny’s objection from truth-conduciveness. Both arguments are veritistic: They assume that only true beliefs bear final epistemic value, and only false beliefs bear final epistemic disvalue. I argue that, under the assumption that veritism is true, the only way to make sense of both arguments is to make a distinction between Normativity\(+\) and Normativity\(-\) (i.e. to deny that both views are coextensive). Then, I argue that accuracy-dominance arguments fail to vindicate Normativity\(+\).

This is not necessarily bad news. In conclusion, I explain why this might be an occasion to adjust our expectations in the debate on the normativity of formal coherence requirements. Many people think that there is something bad or suboptimal about inconsistent combinations of attitudes. The mistake might have been to try to explain this assumption in terms of an obligation to be consistent. Being in a position to vindicate Normativity\(-\) while remaining neutral on Normativity\(+\) could be advantageous in the debate on the normativity of formal coherence requirements.

1 The “Why-Be-Consistent?” Challenges

There are many putative explanations of why one ought to have some consistent combinations of beliefs. They stem from the normative authority of truth, knowledge or reasons, as in the following:

Truth Vindication. One ought to believe \(p\) if and only if \(p\). Truth is consistent (or: Inconsistent propositions cannot be true simultaneously). So, one ought to have some consistent combinations of beliefs (e.g. the true ones).

Knowledge Vindication. One is epistemically permitted to believe \(p\) if and only if one is in a position to know that \(p\). Knowledge is consistent (or: Propositions that one is in a position to know cannot be inconsistent with each other). So, one is only epistemically permitted to believe consistent combinations of beliefs.

Reasons Vindication. One is epistemically permitted to believe \(p\) if and only if one has sufficient epistemic reason to believe \(p\). One never has sufficient epistemic reason to believe \(p\) and sufficient epistemic reason to disbelieve \(p\) simultaneously. So, one is only epistemically permitted to believe consistent combinations of beliefs.3

Philosophers like Broome (2013) and others are worried that the above putative vindications do not fully vindicate the normativity of Consistency. Some consistent combinations of beliefs may include some false, unjustified or unreasonable beliefs. Even if consistent agents sometimes believe propositions that are false, unjustified or unreasonable, it seems that they satisfy a distinct obligation to have consistent beliefs (i.e. an obligation that does not boil down to truth, knowledge or reasons). In other words, perhaps the agent is unjustified, mistaken or unreasonable, but one could still say: At least he or she is consistent. Here, the putative obligation to be consistent does not come from truth, knowledge or reasons.4

So, according to some philosophers, the above vindications are somehow incomplete. Perhaps we can easily argue that agents ought to have some consistent combinations of beliefs, but finding a vindication of Consistency that covers all the possible consistent combinations of beliefs has proved to be a difficult task.

It should also be noted that the normativity of Consistency is part of a broader debate on the normativity of structural rationality. Structural rationality allegedly requires agents not to be incoherent—for example, not to be akratic, not to have intransitive preferences, and so forth (Worsnip 2018a, 2018b). So, in addition to Consistency, there are other putative structural requirements of rationality, like:

Inter-Level Coherence. Rationality requires that, if A believes that he or she has sufficient epistemic reason to believe \(p\), then A believes that \(p\).5

Instrumental Principle. Rationality requires that, if A intends to \(\phi\), and A believes that \(\psi\)-ing is a necessary means to \(\phi\)-ing, then A intends to \(\psi\).6

Broome and others have tried to find compelling arguments for the claim that structural rationality has normative authority. However, structural rationality is neutral on whether one’s beliefs should be true, reasonable or amount to knowledge. Some entirely false and unreasonable belief systems can satisfy the requirements of structural rationality. So, at least given the agenda of these philosophers, a good vindication of the normativity of Consistency should cover the cases in which one’s beliefs are false or unreasonable.

An interesting feature of accuracy-dominance arguments is that they remain neutral on whether one’s beliefs should be true, reasonable or amount to knowledge. They focus on what is wrong with having some combinations of beliefs, regardless of the substantive properties of such beliefs.

2 Accuracy-Dominance and Consistency

Accuracy-dominance arguments for vindicating the normativity of Consistency come from decision theory and rely on the following principle:

Strong Dominance. If an available state \(X\) is strongly dominated by an available state \(Y\), in the sense that \(Y\) is better (has more value) than \(X\) at every possible world, then one ought to avoid state \(X\).
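Put schematically (this is my own gloss, not a formula from the sources discussed below), where \(V(X, w)\) denotes the value of an available state \(X\) at world \(w\):

\[
\forall w:\; V(Y, w) > V(X, w) \;\;\Longrightarrow\;\; \text{one ought to avoid } X.
\]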

Strong Dominance has been used to vindicate probabilism, the view roughly stating that an agent’s rational credences should satisfy the probability axioms. With respect to some inaccuracy measures such as the Brier score, probabilistically inconsistent agents have access to a credence function that is less inaccurate (and thus less epistemically disvaluable) at every possible world (Joyce 1998; Leitgeb and Pettigrew 2010; Pettigrew 2016a).
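For a concrete illustration (the numbers are stipulated by me for this purpose, not taken from the works just cited), suppose an agent has credences \(c(p) = 0.6\) and \(c(\neg p) = 0.6\), violating Credal Consistency, and measure inaccuracy at a world by the Brier score, i.e. the sum of the squared distances between each credence and the proposition’s truth value (1 or 0) at that world:

\[
\begin{aligned}
\text{if } p \text{ is true:}\quad & (1-0.6)^2 + (0-0.6)^2 = 0.52, \\
\text{if } p \text{ is false:}\quad & (0-0.6)^2 + (1-0.6)^2 = 0.52.
\end{aligned}
\]

The consistent alternative \(c'(p) = c'(\neg p) = 0.5\) has a Brier score of \((1-0.5)^2 + (0-0.5)^2 = 0.5\) at both worlds. Since \(0.5 < 0.52\) at every possible world, \(c'\) accuracy-dominates \(c\).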

For the sake of simplicity, I will leave aside dominance for credence and focus on dominance for belief (these arguments have the same structure, but dominance arguments for belief are more accessible).

There is a plausible explanation of why inconsistent combinations of beliefs are strongly dominated. An agent can take different doxastic attitudes towards \(p\), as in the following:

  (i) Believing \(p\) and not disbelieving \(p\),

  (ii) Disbelieving \(p\) and not believing \(p\),

  (iii) Neither believing nor disbelieving \(p\),

  (iv) Believing \(p\) and disbelieving \(p\).

The question is whether (iv) is strongly dominated. To answer this question, we need to determine the epistemic value of (iv) at every possible world. In veritistic frameworks, only true beliefs have final epistemic value and only false beliefs have final epistemic disvalue. Accordingly, \(T\) is the epistemic value of having a true belief (for \(T > 0\)), \(F\) is the epistemic disvalue of having a false belief (for \(F < 0\)), and the epistemic value of not believing \(p\) (or not disbelieving \(p\)) is zero.7 Finally, assume that \(T < -F\), which amounts to endorsing a conservative account of epistemic value: the disvalue of a false belief outweighs the value of a true belief. The conservative constraint on epistemic value is plausible.8 As Dorst says:9

[An epistemically rational agent] will be doxastically conservative… Why? Well here’s a fair coin—does she believe it’ll land heads? Or tails? Or both? Or neither? Clearly neither. But if she cared more about seeking truth than avoiding error, why not believe both? She’d then be guaranteed to get one truth and one falsehood, and so be more accurate than if she believed neither… Upshot: we impose a Conservativeness constraint to capture the sense in which Rachael has ‘more to lose’ in forming a belief than she does to gain. (2019, 11)

Then, we can determine the possible values of each option at every possible world. Since the value of these options is solely determined by \(p\)’s truth value, we need to consider the worlds in which \(p\) is true and the worlds in which \(p\) is false, as in Table 1.

Table 1: An agent’s doxastic options with respect to \(p\)

Doxastic option                             | \(p\) is true | \(p\) is false
Believing \(p\) and not disbelieving \(p\)  | \(T\)         | \(F\)
Disbelieving \(p\) and not believing \(p\)  | \(F\)         | \(T\)
Neither believing nor disbelieving \(p\)    | \(0\)         | \(0\)
Believing \(p\) and disbelieving \(p\)      | \(T+F\)       | \(T+F\)

Finally, in accordance with Table 1, we can conclude that inconsistent combinations of beliefs are strongly dominated. The following reasoning supports such a conclusion:

(1) \(T < -F\) (conservative assumption). Accordingly, \(T+F < 0\).

(2) Following (1) and Table 1, believing \(p\) and disbelieving \(p\) simultaneously has an epistemic value of less than 0 at every possible world.

(3) However, following Table 1, neither believing nor disbelieving \(p\) has an epistemic value of 0 at every possible world.

(C) Therefore, following (2) and (3), inconsistent combinations of beliefs such as believing \(p\) and disbelieving \(p\) are strongly dominated: another available option (neither believing nor disbelieving \(p\)) is more valuable at every possible world.10

Hence, one ought to avoid being inconsistent.
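A toy instantiation may help (the numerical values are mine, chosen only to satisfy the conservative constraint that a false belief’s disvalue outweighs a true belief’s value). Let \(T = 1\) and \(F = -2\). Then:

\[
T + F = 1 + (-2) = -1 < 0,
\]

so believing and disbelieving \(p\) scores \(-1\) at both worlds, while suspending judgment scores \(0\) at both worlds. Note that the consistent options do not dominate one another: believing \(p\) alone scores \(1\) at the \(p\)-world but \(-2\) at the \(\neg p\)-world, so neither it nor suspension is better at every world. Only the inconsistent option is dominated.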

3 Truth-Conduciveness, Reasons For and Reasons Against

3.1 Kolodny’s Objection From Truth-Conduciveness

The above argument states that inconsistent combinations of beliefs are dominated, which means that one ought not to be inconsistent. Naturally, this seems to suggest that one ought to be consistent. But the equivalence between these two claims is less obvious than it seems.

To see why, consider Kolodny’s argument against the normativity of Consistency. According to him, one does not necessarily have an epistemic reason to be consistent. Rather, what matters from an epistemic point of view is having true beliefs and avoiding false beliefs, and satisfying Consistency does not guarantee a better ratio of true to false beliefs. In fact, some perfectly consistent sets of beliefs are entirely false (or improbable on the evidence). Kolodny summarizes his argument in the following way:

From the standpoint of theoretical deliberation—which asks ‘What ought I to believe?’—what ultimately matters is simply what is likely to be true, given what there is to go on. […] [But] formal coherence may as soon lead one away from, as toward, the true and the good. Thus, if someone asks from the deliberative standpoint ‘What is there to be said for making my attitudes formally coherent as such?’ there seems, on reflection, no satisfactory answer. (2007a, 231)

In other words, if one merely satisfies Consistency, one is not more likely to end up forming true beliefs and avoiding false beliefs. So, the mere satisfaction of Consistency does not improve one’s ratio of true to false beliefs. In view of the foregoing, Kolodny thinks that it is false that one falls under an obligation to be consistent.11

3.2 Comparing the Objection from Truth-Conduciveness and Accuracy-Dominance Arguments

Kolodny argues that there is no reason to be consistent. His argument relies on the fact that being consistent does not guarantee a good ratio of true to false beliefs. By way of contrast, accuracy-dominance arguments suggest that there is good reason not to be inconsistent. If one is inconsistent, one is strongly dominated, in the sense that one has access to a better option at every possible world. For instance, if one believes \(p\) and disbelieves \(p\) simultaneously, one will necessarily improve one’s situation by neither believing nor disbelieving \(p\).

Accuracy-dominance arguments and Kolodny’s objection from truth-conduciveness are both veritistic.12 Indeed, they presuppose that only true beliefs bear final epistemic value, and only false beliefs bear final epistemic disvalue. Nevertheless, such arguments apparently support incompatible conclusions concerning the normativity of Consistency: Kolodny argues that veritism entails the denial of the normativity of Consistency, whereas accuracy-dominance arguments support the normativity of Consistency. This is puzzling.

Perhaps Kolodny and accuracy-dominance theorists do not endorse the same version of veritism. Veritism says that only true beliefs have final epistemic value, and only false beliefs have final epistemic disvalue. However, when it comes to epistemic obligations and permissions, these assumptions concerning epistemic value can be translated into norms in many different ways. For instance, perhaps agents ought to maximize their total epistemic score (e.g. the total balance of epistemic value they get from their doxastic states), or perhaps agents ought to maximize their expected epistemic score. For clarity, consider the following example: Suppose \(p\) is very likely relative to a body of evidence E. But as it happens, \(p\) is false. Then, believing \(p\) (or having a high credence in \(p\)) might maximize expected epistemic value with respect to E. But disbelieving \(p\) (or having a low credence in \(p\)) will maximize epistemic value tout court.
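A toy calculation, with values stipulated purely for illustration, makes the contrast vivid. Suppose E makes \(p\) 90% probable, \(p\) is in fact false, a true belief is worth \(T = 1\) and a false belief costs \(F = -2\). Then:

\[
\text{expected value of believing } p \text{ relative to E} = 0.9\,T + 0.1\,F = 0.9 - 0.2 = 0.7,
\]

which is higher than the expected value of disbelieving \(p\) (\(0.9\,F + 0.1\,T = -1.7\)) or of suspending judgment (\(0\)). Yet, since \(p\) is false, the actual epistemic value of believing \(p\) is \(F = -2\), whereas disbelieving \(p\) yields \(T = 1\). So believing \(p\) maximizes expected epistemic value relative to E, while disbelieving \(p\) maximizes epistemic value tout court.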

Yet, it is implausible that a difference in how Joyce and Kolodny understand veritism is the reason why they disagree. Kolodny’s argument can be reformulated in many different ways. Consider the following possibilities: (i) Suppose agents ought to maximize expected accuracy. Then, Kolodny could say: Some consistent combinations of beliefs can minimize expected accuracy (believing the most improbable propositions can be consistent). (ii) Suppose agents ought to optimize their ratio of true to false beliefs. Then, Kolodny could argue that some agents with a very bad ratio of true to false beliefs are consistent. (iii) Suppose agents ought to maximize total accuracy. Then, Kolodny could say: Some consistent combinations of beliefs can minimize accuracy (believing false propositions only can be consistent). As we can see, Kolodny’s objection is malleable.13

Another possibility is that Kolodny and accuracy-first theorists have a different understanding of what “ought” means. We can make a distinction between normativity in the rule-following sense (as in: Relative to domain D, A ought to X) and normativity in the reason-involving sense (as in: A has a reason to X).14 For example, the rules of etiquette require agents to be polite, but agents might lack a reason to be polite. By way of analogy with the rules of etiquette, perhaps accuracy-first theorists are merely interested in arguing that the rules of rationality require consistency. If so, their view would be compatible with Kolodny’s—namely, that agents do not have a reason to be consistent.

It is true that accuracy-first theorists see Consistency as a demand of rationality. However, it is implausible that accuracy-first theorists are merely concerned with normativity in the rule-following sense. Accuracy-first theorists like Joyce tie norms of rationality to epistemic value, as in the following:

The Norm of Truth. An epistemically rational agent must strive to hold a system of full beliefs that strikes the best attainable overall balance between the epistemic good of fully believing truths and the epistemic evil of fully believing falsehoods (1998, 577).

The Norm of Gradational Accuracy. An epistemically rational agent must evaluate partial beliefs on the basis of their gradational accuracy, and she must strive to hold a system of partial beliefs that, in her best judgment, is likely to have an overall level of gradational accuracy at least as high as that of any alternative system she might adopt (1998, 579).

Satisfying the requirements of rationality is different from, say, satisfying the requirements of etiquette. The former has a privileged relationship to value. Epistemically rational agents want to optimize their overall balance of epistemic value. Accordingly, it would be surprising that Joyce and others are merely concerned with normativity in the rule-following sense. Specifically, it would be surprising that, while rationality has some sort of privileged relationship to value, it is merely normative in the rule-following sense.15

Under the assumption that Kolodny and accuracy-dominance theorists agree upon a specific version of veritism and the meaning of “ought,” the natural reaction is to think that at least one of the above arguments is mistaken—either the objection from truth-conduciveness is inconclusive, or accuracy-dominance arguments fail. After all, how can there be no reason to be consistent and reasons against being inconsistent? If there is something wrong with being inconsistent, there must be something good about being consistent!

However, this natural reaction presupposes that there is always a connection between (i) reasons for being consistent (as in Normativity\(+\)) and (ii) reasons against being inconsistent (as in Normativity\(-\)). Call this the Coextensivity Thesis, as in the following:

Coextensivity Thesis. Arguments in favour of Normativity\(-\) count as arguments in favour of Normativity\(+\) (and vice versa).

Those who endorse the Coextensivity Thesis think that (i) and (ii) express the same normative relation.

If the Coextensivity Thesis were correct, then Kolodny’s objection from truth-conduciveness would be inconclusive. Under the assumption that the Coextensivity Thesis is correct, two kinds of considerations can vindicate the view that one ought to be consistent—namely, reasons to be consistent and reasons against being inconsistent. Kolodny argues for the absence of reasons in favour of being consistent. But if the Coextensivity Thesis is correct, such considerations are just half of the story. We also need to consider whether there are reasons against being inconsistent in the balance, since they count as reasons for being consistent. Accuracy-dominance arguments entail that one ought not to be inconsistent. So, even if Kolodny is right that there is no reason to satisfy Consistency, this does not entail that it is false that one ought to be consistent. Insofar as there are reasons against being inconsistent (as accuracy-dominance arguments suggest), there is a reason to be consistent.

However, if the Coextensivity Thesis is false, then accuracy-dominance arguments are compatible with the objection from truth-conduciveness. Here is why. Kolodny argues that there is no reason to be consistent: he denies that one ought to be consistent, as in Normativity\(+\). However, if the Coextensivity Thesis is false, we can deny Normativity\(+\) without denying Normativity\(-\). In other words, even if it is false that one ought to be consistent, perhaps one ought not to be inconsistent. The same goes for accuracy-dominance arguments. According to such arguments, inconsistent combinations of beliefs are dominated. So, one ought not to be inconsistent. But if the Coextensivity Thesis is false, this does not entail that one ought to be consistent.

3.3 Reasons to be Consistent and the Coextensivity Thesis

So, is the Coextensivity Thesis true? This depends on what “a reason to be consistent” means. Suppose, like Kolodny, that “a reason to be consistent” concerns each individual consistent option one has (see Section 3.1). That is, suppose that “a reason to be consistent” means something like “a consideration that counts in favour of each individual consistent option one has.” For Kolodny, nothing can be said in favour of some consistent combinations of attitudes. So, under this interpretation of what “a reason to be consistent” means, we do not necessarily have a reason to be consistent.

Relative to this interpretation of what “a reason to be consistent” means, the Coextensivity Thesis does not seem plausible. For reasons found in Snedegar (2018), we can make a distinction between reasons for Consistency (as in Normativity\(+\)) and reasons against inconsistency (as in Normativity\(-\)). The distinction comes from the following account of reasons for and reasons against endorsed by Snedegar:

My view puts a strong condition on reasons for and a weak condition on reasons against. For some objective to provide a reason for an option, that option has to do the best with respect to the objective. For some objective to provide a reason against an option, that option only has to do worse than some alternative. (2018, 737)
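Schematically (my gloss on the quoted passage), let \(v_O(A)\) measure how well option \(A\) does with respect to objective \(O\), and let \(\mathrm{Alt}(A)\) be the set of alternatives to \(A\). Then:

\[
\begin{aligned}
&O \text{ provides a reason for } A &&\text{iff}\quad v_O(A) \geq v_O(B) \text{ for every } B \in \mathrm{Alt}(A); \\
&O \text{ provides a reason against } A &&\text{iff}\quad v_O(B) > v_O(A) \text{ for some } B \in \mathrm{Alt}(A).
\end{aligned}
\]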

Snedegar roughly argues that the problem with views that lump together reasons against and reasons for is that there can be good reasons not to \(\phi\), even if there are worse alternatives to \(\phi\)-ing.16 For instance, suppose that I am trying to decide what to drink. I might have conclusive reason not to drink gin, but this does not entail that I have a reason to drink any beverage that isn’t gin. I should definitely not drink petrol, even if petrol isn’t gin. This is compatible with my having conclusive reason not to drink gin.

Snedegar’s observation sits well with the accuracy-dominance arguments discussed in Section 2. Indeed, recall the options agents have in Table 1. Clearly, there is conclusive reason not to go for the inconsistent option, since neither believing nor disbelieving \(p\) is better than being inconsistent at every possible world. However, this does not entail that there is a reason in favour of every alternative to the inconsistent option. For instance, disbelieving \(p\) when \(p\) is true (or believing \(p\) when \(p\) is false) is worse than being inconsistent. So, as in the gin and petrol case, reasons against inconsistency are logically weaker than reasons for Consistency.
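The values in Table 1 bear this out. At a world where \(p\) is true, disbelieving \(p\) scores \(F\), while the inconsistent option scores \(T + F\), and

\[
F \;<\; T + F \;<\; 0,
\]

since \(T > 0\) and, by the conservative constraint, a false belief’s disvalue outweighs a true belief’s value. So this consistent option is worse, at that world, than the inconsistent one, even though the inconsistent option is dominated overall.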

This suggests that accuracy-dominance arguments do not vindicate Normativity\(+\) on their own. Of course, when combined with the Coextensivity Thesis, these arguments support Normativity\(+\). But Kolodny’s interpretation of what “a reason to be consistent” means conflicts with the Coextensivity Thesis. So, while accuracy-dominance arguments support Normativity\(-\), it is an open question whether they also support Normativity\(+\).

Here is a response to my argument on behalf of the accuracy-dominance theorist. We can regroup the consistent options in Table 1 under a single option. Call this the consistent option. With respect to the consistent option, Snedegar’s distinction does not apply. If there is conclusive reason not to go for the inconsistent option, and the only option left is the “regrouped” consistent option, then reasons against inconsistency favour the consistent option. So, could there be a sense in which the Coextensivity Thesis is true?17

My response to this objection goes as follows. This way of framing the problem cannot make sense of Kolodny’s objection concerning some consistent options. There is something wrong with some consistent combinations of beliefs: some of them are entirely false or improbable on the evidence. Kolodny is right to point out that nothing can be said in favour of these combinations of attitudes. The only way to make sense of Kolodny’s objection is not to regroup all the consistent options under a single label, precisely because relevant normative distinctions can (and should) be made between some consistent options.

At best, this reply shows that, under a different interpretation of what “a reason to be consistent” means, the Coextensivity Thesis is true. But Kolodny’s argument still succeeds relative to another interpretation of this expression. When Kolodny discusses the normativity of Consistency, he discusses the normativity of the individual consistent options one has, including the ones that are entirely false or improbable on the evidence. The accuracy-dominance theorist can claim that one ought to be consistent, but that is simply because the expression “one ought to be consistent” here refers to something logically weaker than what Kolodny has in mind.18

3.4 An Escape Route for the Accuracy-Dominance Theorist?

The accuracy-dominance theorist could then offer the following objection. Suppose there is an accuracy-dominance argument against one’s attitudes. Accordingly, one can identify at least one collection of attitudes that veritistically dominates one’s current state. If agents can identify at least one set of attitudes that is better than their current state, then they have a reason to take the dominating set of attitudes, which will be consistent. Doesn’t this support the view according to which one ought to be consistent? If agents ought to take dominating combinations of beliefs, and such combinations of beliefs are consistent, then this seems to entail that agents ought to be consistent.19

Whether this objection carries weight depends on what we expect accuracy-dominance arguments to prove. Let me explain.

Suppose the contender is right. Then, accuracy-dominance vindications are akin to the Truth Vindication, the Knowledge Vindication or the Reasons Vindication discussed in Section 1. If one has inconsistent combinations of beliefs (say, one believes \(p\) and also believes \(\neg p\)), the Truth Vindication says that agents ought to maintain the true one (and abandon the false one), the Knowledge Vindication says that agents are only permitted to maintain the known one, and the Reasons Vindication says that agents are only permitted to maintain the reasonable one (and ought to abandon the unreasonable one). In any case, satisfying such norms means that agents will cease entertaining inconsistent combinations of beliefs.

The contender makes a similar point. If one has inconsistent combinations of beliefs, one should go for the option dominating inconsistent combinations of beliefs. But if that is right, the accuracy-dominance argument merely entails that agents ought (or have reasons) to have some combinations of beliefs, not any consistent combination of beliefs. In other words, the argument leaves out some consistent combinations of beliefs.

This brings us back to the discussion in Section 1. What do we expect from a good vindication of the normativity of Consistency? For many philosophers, a good vindication of Consistency should cover all the possible consistent combinations of beliefs. If the contender is right, then accuracy-dominance arguments can explain the significance of some consistent combinations of beliefs—namely, the dominating ones. But this is not what we were looking for. The explanation should apply to all the consistent combinations of beliefs. To be clear: Some philosophers might not be interested in this specific interpretation of the “Why-Be-Consistent” debate. It should be clear that, with respect to other understandings of the question, the contender is right.

4 Conclusion and Implications in the Debate on the Normativity of Structural Rationality

This paper supports the view that there are two theses concerning the normativity of Consistency: Normativity\(+\) and Normativity\(-\). While accuracy-dominance arguments support Normativity\(-\), they might not necessarily support Normativity\(+\). This is so, because the Coextensivity Thesis might be false. In fact, one way to reconcile Kolodny’s objection from truth-conduciveness with accuracy-dominance arguments is to deny the Coextensivity Thesis.

These clarifications concerning Normativity\(+\) and Normativity\(-\) allow us to rethink the debate on the normativity of structural rationality. Indeed, a popular strategy for arguing against the normativity of structural rationality is to point out that there is no reason to satisfy some specific rational requirements (such as Consistency). Kolodny’s objection from truth-conduciveness is a good illustration of such arguments. These arguments are compelling if we focus on Normativity\(+\). But this might be a mistake. Perhaps, when it comes to formal requirements like Consistency, the only view we should try to vindicate is Normativity\(-\).

The argument of this paper allows us to make sense of some pre-theoretically correct assumptions about structural requirements of epistemic rationality such as Consistency. Plausibly, there is something wrong, suboptimal or disvaluable about inconsistent combinations of beliefs. The mistake might have been to try to explain this assumption in terms of an obligation to be consistent. But if I am right, we might only be able to explain this assumption in terms of an obligation not to be inconsistent. Hence, requirements like Consistency might merely be normative in a weak sense.

The good news is that we can now make sense of such a possibility. If the Coextensivity Thesis is false, it makes perfect sense to say that one ought not to be inconsistent without also saying that one ought to be consistent. There might not be anything good about being structurally rational, but it seems patently clear that there is something bad about being structurally irrational.


  1. This requirement is sometimes called “Pairwise Consistency”, as in Easwaran (2016).↩︎

  2. See Way (2010) for an overview of this debate. See Fitelson (2016) on epistemic teleology and coherence requirements. See Bona and Staffel (2018) on accuracy and approximation of Bayesian requirements of probabilistic coherence. See also Pettigrew (2013, 2016a).↩︎

  3. Kolodny (2007b) endorses this view. See Daoust (2020) for discussion.↩︎

  4. In fact, Broome (2013, ch. 11) is interested in the stronger claim that rationality is a source of normativity. So, he is not interested in offering a derivative vindication of consistency requirements, that is, a vindication of these requirements on other grounds (like truth, knowledge, or reasons). By contrast, dominance principles are often tied to rationality (see e.g. Joyce 1998).↩︎

  5. Coates (2012) and Lasonen-Aarnio (2020) have argued that responding correctly to one’s evidence sometimes entails believing “P, but I am irrational to believe P”, which is an incoherent combination of attitudes. They conclude that such incoherence is not necessarily irrational. See Greco (2014), Horowitz (2014), Kiesewetter (2016), Littlejohn (2018), Titelbaum (2015) and Worsnip (2018a) for various responses to this view.↩︎

  6. See, among others, Broome (2013, sec.9.4), Kiesewetter (2017, ch. 10) and Way (2013) on the Instrumental Principle.↩︎

  7. I’m glossing over some inessential subtleties here. It is possible to assign a value to not believing \(p\) (or to withholding judgment on whether \(p\)), but ultimately, we would get exactly the same results. See Easwaran (2016, sec.C) and Dorst (2019, 10, n. 12).↩︎

  8. But this constraint might not stem from accuracy-first epistemology. See Steinberger (2019) and the next footnote.↩︎

  9. In addition to Dorst’s argument, see Easwaran (2016), Easwaran and Fitelson (2015) and Pettigrew (2016b) for similar arguments in favour of the conservative account of epistemic value. See Steinberger (2019) on why alternatives to conservatism are compatible with accuracy-first epistemology.↩︎

  10. Similar arguments can be found in Easwaran (2016, §B) and Pettigrew (2016b, 256). Dorst (2019, 31, esp. proposition 3) argues for a similar but contextualist view.↩︎

  11. Elsewhere, Kolodny (2005) raises some objections against the normativity of other structural requirements, such as Inter-Level Coherence.↩︎

  12. See notably Goldman (2015) and Whiting (2010) on veritism.↩︎

  13. I thank a referee for inviting me to discuss this possibility.↩︎

  14. See Parfit (2011, 144–48) on this distinction.↩︎

  15. I thank a referee for inviting me to clarify this possibility.↩︎

  16. See Snedegar (2018) for more details.↩︎

  17. I thank a referee for inviting me to discuss this objection.↩︎

  18. My response might not convince some readers. In any case, we can draw a lesson from this discussion. We have learned that the expression “a reason to be consistent” is ambiguous. Some readings of this expression are a problem for Kolodny’s argument, and other readings of this expression conflict with vindicating Normativity\(+\).↩︎

  19. I thank a referee for bringing this objection to my attention.↩︎

References

Bona, Glauber de, and Julia Staffel. 2018. “Why Be (Approximately) Coherent?” Analysis 78 (3): 405–15. doi:10.1093/analys/anx159.
Broome, John A. 2013. Rationality Through Reasoning. Oxford: Wiley-Blackwell.
Coates, Allen. 2012. “Rational Epistemic Akrasia.” American Philosophical Quarterly 49 (2): 113–24.
Daoust, Marc-Kevin. 2020. “The Explanatory Role of Consistency Requirements.” Synthese 197: 4551–69. doi:10.1007/s11229-018-01942-8.
Dorst, Kevin. 2019. “Lockeans Maximize Expected Accuracy.” Mind 128 (509): 175–211. doi:10.1093/mind/fzx028.
Easwaran, Kenny. 2016. “Dr. Truthlove or: How I Learned to Stop Worrying and Love Bayesian Probabilities.” Noûs 50 (4): 816–53. doi:10.1111/nous.12099.
Easwaran, Kenny, and Branden Fitelson. 2015. “Accuracy, Coherence, and Evidence.” In Oxford Studies in Epistemology, edited by Tamar Szabó Gendler and John Hawthorne, V:61–96. Oxford: Oxford University Press.
Fitelson, Branden. 2016. “Coherence.” http://fitelson.org/coherence/coherence_duke.pdf. Unpublished manuscript.
Goldman, Alvin I. 2015. “Reliabilism, Veritism, and Epistemic Consequentialism.” Episteme 12 (2): 131–43. doi:10.1017/epi.2015.25.
Greco, Daniel. 2014. “A Puzzle about Epistemic Akrasia.” Philosophical Studies 167 (2): 201–19. doi:10.1007/s11098-012-0085-3.
Horowitz, Sophie. 2014. “Epistemic Akrasia.” Noûs 48 (4): 718–44. doi:10.1111/nous.12026.
Joyce, James M. 1998. “A Nonpragmatic Vindication of Probabilism.” Philosophy of Science 65 (4): 575–603. doi:10.1086/392661.
Kiesewetter, Benjamin. 2016. “You Ought to \(\phi\) Only If You May Believe That You Ought to \(\phi\).” The Philosophical Quarterly 66 (265): 760–82. doi:10.1093/pq/pqw012.
———. 2017. The Normativity of Rationality. Oxford: Oxford University Press.
Kolodny, Niko. 2005. “Why Be Rational?” Mind 114 (455): 509–63. doi:10.1093/mind/fzi509.
———. 2007a. “How Does Coherence Matter?” Proceedings of the Aristotelian Society 107: 229–63. doi:10.1111/j.1467-9264.2007.00220.x.
———. 2007b. “State or Process Requirements?” Mind 116 (462): 371–85. doi:10.1093/mind/fzm371.
Lasonen-Aarnio, Maria. 2020. “Enkrasia or Evidentialism? Learning to Love Mismatch.” Philosophical Studies 177: 597–632. doi:10.1007/s11098-018-1196-2.
Leitgeb, Hannes, and Richard Pettigrew. 2010. “An Objective Justification of Bayesianism i: Measuring Inaccuracy.” Philosophy of Science 77 (2): 201–35. doi:10.1086/651317.
Littlejohn, Clayton. 2018. “Stop Making Sense? On a Puzzle about Rationality.” Philosophy and Phenomenological Research 96 (2): 257–72. doi:10.1111/phpr.12271.
Parfit, Derek. 2011. On What Matters. Volume One. Oxford: Oxford University Press. Edited and introduced by Samuel Scheffler.
Pettigrew, Richard. 2013. “Accuracy and Evidence.” Dialectica 67 (4): 579–96. doi:10.1111/1746-8361.12043.
———. 2016a. Accuracy and the Laws of Credence. Oxford: Oxford University Press.
———. 2016b. “Jamesian Epistemology Formalised: An Explication of ‘the Will to Believe’.” Episteme 13 (3): 253–68. doi:10.1017/epi.2015.44.
Snedegar, Justin. 2018. “Reasons for and Reasons Against.” Philosophical Studies 175: 725–43. doi:10.1007/s11098-017-0889-2.
Steinberger, Florian. 2019. “Accuracy and Epistemic Conservatism.” Analysis 79 (4): 658–69. doi:10.1093/analys/any094.
Titelbaum, Michael G. 2015. “Rationality’s Fixed Point (or: In Defense of Right Reason).” In Oxford Studies in Epistemology, edited by Tamar Szabó Gendler and John Hawthorne, V:253–94. Oxford: Oxford University Press.
Way, Jonathan. 2010. “The Normativity of Rationality.” Philosophy Compass 5 (12): 1057–68. doi:10.1111/j.1747-9991.2010.00357.x.
———. 2013. “Intentions, Akrasia, and Mere Permissibility.” Organon F 20 (4): 588–611.
Whiting, Daniel. 2010. “Should I Believe the Truth?” Dialectica 64 (2): 213–24. doi:10.1111/j.1746-8361.2009.01204.x.
Worsnip, Alex. 2018a. “The Conflict of Evidence and Coherence.” Philosophy and Phenomenological Research 96 (1): 3–44. doi:10.1111/phpr.12246.
———. 2018b. “What Is (In)coherence?” In Oxford Studies in Metaethics, edited by Russ Shafer-Landau, XIII:184–206. Oxford: Oxford University Press.