
Epistemic Sanity or Why You Shouldn't be Opinionated or Skeptical

Published online by Cambridge University Press:  24 November 2022


Abstract

I propose the notion of ‘epistemic sanity’, a property of parsimony between the holding of true but not false beliefs and the consideration of our cognitive limitations. Where ‘alethic value’ is the epistemic value of holding true but not false beliefs, the ‘alethic potential’ of an agent is the amount of extra alethic value that she is expected to achieve, given her current environment, beliefs, and reasoning skills. Epistemic sanity would be related to the holding of (true or false) beliefs that increase the agent's alethic potential (relevant beliefs) but not of beliefs that decrease it (this is related to cognitive parsimony). Suspension of judgment, forgetting, and clutter avoidance are the main contributors to an agent's epistemic sanity, where this paper focuses on suspension. I argue that rational suspension favors the holding of true and relevant beliefs, which is not the case for the extremes of opinionation (no suspension) and skepticism (general suspension). In the absence of evidence, opinionated agents are often forced to rely on principles such as the principle of indifference, but suspension dominates indifference in terms of alethic value in some conditions. A rational agent would only find it beneficial to adopt skepticism if she considers herself to be an anti-expert about her entire agenda, but then ‘flipping’ beliefs maximizes expected alethic value in relation to skepticism. The study of epistemic sanity results in an ‘impure’ veritism, which can deal with some limitations of veritism (e.g., explaining the existence of false but relevant beliefs).

Copyright © The Author(s), 2022. Published by Cambridge University Press

Introduction

An omniscient being does not need to (maybe, she shouldn't) suspend judgment about any proposition because she knows the truth-value of any proposition. An omnipotent being with unlimited cognitive resources does not need to (maybe, she shouldn't) forget any stored information. After all, she has unlimited space in memory and unlimited computational power available for searching over any amount of retrieved information. For the same reason, she does not need to avoid cluttering her mind with irrelevant information. We are not such a being! One fundamental fact about our cognitive situation is that “human beings are in the finitary predicament of having fixed limits on their cognitive capacities and the time available to them” (Cherniak 1986: 8). Epistemic rationality (in the following, ‘rationality’) seems to require ‘finite reasoners’ (those in the finitary predicament) to convert their scarce cognitive resources into epistemic value efficiently. Rationality seems to require finite reasoners to exhibit a form of ‘cognitive parsimony’.

This fact is often recognized in the cognitive sciences, where the “tractable cognition thesis” (van Rooij 2008) is used to constrain the space of computational-level theories of rationality (e.g., Oaksford and Chater 2007: 35). Epistemologists, on the other hand, tend to dismiss considerations about the cognitive limitations of finite reasoners as trading upon practical values, whereas epistemology should concern only the maximization of epistemic value. Various features (e.g., of sets of beliefs)Footnote 1 are regarded as putative sources of epistemic value: closure, coherence, amount of evidential support, etc. In recent decades, some epistemologists have argued for ‘veritism’, the thesis that the fundamental source of epistemic value is the believing of truths but not falsehoods: “[T]he fundamental source of epistemic value for a doxastic state is the extent to which it represents the world correctly: that is, its fundamental epistemic value is determined entirely by its truth or falsity” (Pettigrew 2019b: 761). These epistemologists often sought to justify Bayesian norms of rationality in veritistic terms, resulting in epistemic utility theory (EUT; see Pettigrew 2019a for a review).

I intend to propose the notion of ‘epistemic sanity’, a property of parsimony between the holding of true but not false beliefs and the consideration of our cognitive limitations. ‘Sanity’ is sometimes used for attributing the absence of psychiatric illnesses. In this sense, the paradigm ‘epistemic illness’, caused by the lack of attentiveness to the evidence, would be the tendency to hold blatantly false and unjustified beliefs (as with conspiracy theorists). But ‘sanity’ also has the meaning of ‘mental health’ and this is the meaning that I am exploiting here to designate whether a finite reasoner is ‘in good shape’ for achieving extra epistemic value. Where ‘alethic value’ is the epistemic value of holding true but not false beliefs, the ‘alethic potential’ of an agent is the amount of extra alethic value that she is expected to achieve, given her current environment, beliefs, and reasoning skills. Epistemic sanity would be related to the holding of (true or false) beliefs that increase the agent's alethic potential (relevant beliefs) but not of beliefs that decrease it (this is related to cognitive parsimony). Suspension of judgment, forgetting, and clutter avoidance (Harman 1986: 12) would be the major contributors to an agent's epistemic sanity, where this paper focuses on suspension.

Epistemologists often work under a tripartite account of (categorical) doxastic attitudes: a reasoner may believe or disbelieve a proposition, but she may also hold an attitude of neutrality towards it (‘suspension of judgment’).Footnote 2 The mere lack of belief and disbelief should not be sufficient for suspension, as someone who has never considered a proposition does not hold a doxastic attitude (e.g., an attitude of neutrality) towards it (Friedman 2013b: 167). Suspending is an attitude of “committed neutrality” (Sturgeon 2010: 133) in the sense of being able to be adopted or dropped given reasons. Suspending judgment may be cognitively parsimonious because it may avoid the holding of many beliefs.Footnote 3 How suspension could be related to the increase of veritistic value is more difficult to see. I intend to argue that rational suspension is an epistemic virtue between the two vices of opinionation (holding beliefs about every proposition in your agenda) and skepticism (holding beliefs about none) because it would favor the holding of true and relevant beliefs, which is not the case for opinionation or skepticism.

In the absence of evidence, opinionated agents are often forced to rely on principles such as the principle of indifference, but suspension dominates indifference in terms of alethic value in some conditions (see section 2.1). Using the expression ‘epistemic sanity’ is especially apt when we are dealing with suspension because suspension was sometimes considered central to the mental well-being of rational reasoners. The ancient skeptics describe themselves as “the investigators”, but also as “those who suspend” (see Vogt 2018). They often tell a story where they first find themselves in turmoil due to discrepancies in how things appear to them. They start hoping to achieve tranquility by settling on what is true and false, but their investigation leads them to find opposing views to be of equal weight. The skeptics then free themselves from turmoil by suspending, which finally brings them “tranquility”. In section 2.2, I argue that an agent would only find it beneficial to adopt skepticism if she considers herself to be an anti-expert about her entire agenda, but then ‘flipping’ beliefs maximizes expected alethic value in relation to skepticism. Neither opinionation nor skepticism would be cognitively parsimonious because of the high cognitive cost of holding many beliefs (opinionation) and of maintaining suspension in the face of the evidence (skepticism).

In section 1, I argue that the formal models and methods currently used in epistemology are often inadequate for the study of epistemic sanity and propose adequate measures of alethic value and potential. The alethic potential of an agent should be understood as the amount of extra alethic value that she is expected to achieve, given her current environment, beliefs, and reasoning skills. I use the measure of alethic potential in a notion of epistemic relevance and argue that rational finite reasoners should strive to hold only beliefs that are true and relevant in this sense, so as to maintain their epistemic sanity. In section 2, I discuss the different views about suspension and the vices of opinionation (section 2.1) and skepticism (section 2.2) and argue that these vices secure an illusory appearance of rationality by means of an unreasonable appeal to the minimax principle (section 2.3). In the conclusions, I discuss how forgetting and clutter avoidance are related to epistemic sanity and how the study of epistemic sanity results in an ‘impure’ veritism, which can deal with limitations of veritism (e.g., the existence of false but relevant beliefs).

1. Alethic Potential

The formal models and methods currently used in epistemology are often inadequate for the study of epistemic sanity. Formal epistemologists often propose normative models as ideal reasoners (i.e., reasoners without cognitive limitations).Footnote 4 For example, although Leitgeb (2014: fn. 3) recognizes that “ultimately, we should be concerned with real-world agents”, his “perfectly rational reasoner” (p. 137) is logically omniscient. The general strategy seems to be to propose normative models as ideal reasoners “whom we should strive to approximate” (fn. 3). I take issue with this strategy because what is rational for an ideal reasoner may not be rational for a finite reasoner. For example, why should we strive to approximate logical omniscience? Any attempt to do so would result in a form of cognitive paralysis (where all of our scarce cognitive resources would be wasted in deriving logical truths and logical consequences), which would prevent us from fulfilling our (epistemic and practical) goals.Footnote 5 But, leaving these concerns aside, the investigation of epistemic sanity would still be hindered because the ideal reasoners that are used as normative models often cannot suspend (or forget, or avoid clutter).

The ideal reasoners that are used as normative models in epistemology often cannot suspend. For example, EUT has developed some of the best evaluation methods in formal epistemology (e.g., Pettigrew 2016a). Investigations in EUT usually follow three steps. The first step is to define the ideal set of beliefs: “if a proposition is true in a situation, the ideal credence for an agent in that situation is the maximal credence, which is represented as 1. … [I]f a proposition is false, the ideal credence in it is the minimal credence, which is represented as 0” (Pettigrew 2016a: 3). The ideal reasoner (the reasoner with the ideal set of beliefs) cannot suspend because she holds a belief-value about every proposition in the agenda (i.e., she is opinionated). This fact by itself suggests that suspension is never the “correct” attitude: “According to the fundamental norm of correct belief, suspending judgment about p is neither correct nor incorrect. If one suspends judgment about p then one has neither got things right nor got things wrong about p” (Wedgwood 2002: 272). The ideal reasoners that are often used as normative models in epistemology also cannot forgetFootnote 6 or avoid clutter.Footnote 7

The second step in an investigation within EUT is to define the measure of epistemic value. The most common measure is one of inaccuracy (f), interpreted as the ‘distance’ between an agent's beliefs and those of the ideal reasoner. The measure of f is often a Brier score: $f(\mathrm{B}) = \sum_{\phi \in \mathrm{B}} (v(\phi) - b(\phi))^2$, where B is the agent's belief-set, v(ϕ) is ϕ's truth-value (i.e., its ideal belief-value), and b(ϕ) is ϕ's belief-value for the agent. The most natural way in which agents earn epistemic value is by forming new beliefs (and not only by adjusting the values of prior beliefs). But if f were the correct measure of epistemic (dis)value, then we would have no reason to form new beliefs because, in doing so, we risk losing (but not earning) epistemic value.Footnote 8 If f were the correct measure, then suspension would be a ‘cheap’ way of minimizing epistemic disvalue (e.g., a full skeptic has minimum inaccuracy). The second most common measure of epistemic value is one of accuracy (t, see fn. 8 for an example), which may also be a Brier score: $t(\mathrm{B}) = \sum_{\phi \in \mathrm{B}} (1 - |v(\phi) - b(\phi)|)^2$. Carr (2015: 232) argues that the limitations of accuracy as a measure of epistemic value parallel those of inaccuracy: “the situation is reversed … Each new proposition added to the domain of a credence function increases the epistemic function's epistemic value, as long as the credence it assigns isn't maximally inaccurate”. The measure of inaccuracy forces skepticism; the measure of accuracy forces opinionation.
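To make the contrast between the two measures concrete, here is a minimal Python sketch (my own illustration, not part of the original discussion); the encoding of a belief-set as (truth-value, belief-value) pairs and the example numbers are assumptions made only for this example.

    # Illustrative Brier-style measures: f (inaccuracy) and t (accuracy).
    # A belief-set is a list of pairs (v, b): v is the proposition's truth-value
    # (1.0 or 0.0) and b is the agent's belief-value (credence) in it.

    def inaccuracy(beliefs):
        """f(B) = sum of (v(phi) - b(phi))**2 over the beliefs in B."""
        return sum((v - b) ** 2 for v, b in beliefs)

    def accuracy(beliefs):
        """t(B) = sum of (1 - |v(phi) - b(phi)|)**2 over the beliefs in B."""
        return sum((1 - abs(v - b)) ** 2 for v, b in beliefs)

    opinionated = [(1.0, 0.9), (0.0, 0.2)]   # two fairly accurate beliefs
    skeptic = []                             # suspends about everything

    print(inaccuracy(opinionated), accuracy(opinionated))  # 0.05 1.45
    print(inaccuracy(skeptic), accuracy(skeptic))          # 0.0 0.0

The full skeptic gets the best possible score under f and the worst possible score under t, which is precisely the push towards skepticism and opinionation described above.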

Neither inaccuracy nor accuracy is an adequate measure of epistemic value for the investigation of epistemic sanity. These measures cannot assess the choice of holding (or not) some belief (e.g., adopting a new belief) and, consequently, may only be used to compare agents with belief-sets of the same size, which is often achieved by assuming that they are opinionated over a fixed and finite agenda.Footnote 9 This is a limitation for the study of epistemic sanity because this investigation demands the comparison between counterfactual situations where an agent holds and does not hold a belief. For example, suspension should be able to be adopted or dropped given reasons, which, in a veritistic framework, amounts to the belief-sets where an agent suspends and adopts a belief being comparable in terms of veritistic value (see also Carr 2015: section 2.1). In addition, the measures of inaccuracy and accuracy attribute the maximum and the minimum value (respectively) to the absence of beliefs about some proposition in the agent's agenda (Carr 2015: 333), but an adequate measure of epistemic value should attribute to suspension a value that is in between those of holding a false full belief (the minimum value) and a true full belief (the maximum value).Footnote 10

The investigation of epistemic sanity demands a measure of inaccuracy (f), a measure of accuracy (t), and their integration using an adequate function α (from alethic value). The function α should have a set of beliefs B as input and return a numerical value x, where α(B) would depend on the amount of truth (t) and falsehood (f) in B (α(t, f) = x). An adequate function α(t, f) must: (r1) strictly increase with respect to (wrt) t (i.e., if t′ > t, then α(t′, f) > α(t, f)), which models that ‘the more truth the better’; and (r2) strictly decrease wrt f (i.e., if f′ > f, then α(t, f′) < α(t, f)), which models that ‘the less falsehood the better’. Requirements r1 and r2 are accepted by Douven (2013: 436)Footnote 11 and put to work by Trpin and Pellert (2019), who use the function t − f. This is the ‘minimal’ function that fulfills r1 and r2, but consider the ‘problem of contradictory pairs’. The function t − f evaluates equally an agent who believes (to the same degree) both propositions in a contradictory pair and one who believes neither because t − f = 0 when t = f (independently of whether t = f = 0), but the second agent should be evaluated higher than the first. This problem may be avoided by using weights, as in Rt − Wf with R < W (see Fitelson and Easwaran 2015: 83).

My favorite measure of alethic value is not Rt − Wf but (t − f)/(t + f + c), where c > 0 is a ‘sensitivity’ constant: the smaller the c, the greater the benefit for believing truths and the penalty for believing falsehoods (see Dantas 2021 for a discussion). I prefer this function, among other things, because (i) it deals more naturally with the problem of contradictory pairsFootnote 12 and (ii) it considers the cognitive limitations of finite reasoners.Footnote 13 For simplicity, I will refer to the functions t − f and Rt − Wf as α, and postpone the defense of (t − f)/(t + f + c) to another paper (see Dantas 2021). The function α attributes to the absence of beliefs a value (0) that is in between those of holding a false full belief (negative) and a true full belief (positive). In this context, suspension is not a cheap way of achieving epistemic value, but the attitude of forfeiting a putative increase to avoid a putative decrease of value. Similar considerations hold for forgetting and clutter avoidance.
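The problem of contradictory pairs and the weighted fix can be checked with a small sketch (mine, merely illustrative; the weights R = 1 and W = 2 are arbitrary choices satisfying R < W, and t and f are here simply counts of true and false full beliefs).

    # Illustrative check of the 'problem of contradictory pairs'.
    def alpha_minimal(t, f):
        return t - f

    def alpha_weighted(t, f, R=1.0, W=2.0):   # assumed weights with R < W
        return R * t - W * f

    # Agent 1 fully believes both members of a contradictory pair (one true, one false);
    # Agent 2 believes neither member.
    believer, abstainer = (1, 1), (0, 0)

    print(alpha_minimal(*believer), alpha_minimal(*abstainer))    # 0 0      -> tie: the problem
    print(alpha_weighted(*believer), alpha_weighted(*abstainer))  # -1.0 0.0 -> abstainer ranked higher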

I propose that the epistemic sanity of an agent should be related to the maximization of her ‘alethic potential’, understood as the amount of extra alethic value that she is expected to achieve, given her current environment, beliefs, and reasoning skills:

Definition 1 (Alethic potential (Δα)): Δα = α₁ − α₀,

where α₀ is the α-value of the agent's current (or initial) set of beliefs and α₁ is the α-value that she is expected to achieve from a priori or a posteriori reasoning, given her current environment, beliefs, and reasoning skills. The notion of alethic potential is not function-dependent and may be adapted to any adequate measure of alethic value. For example, Olsson (2011: 128) proposes a Δ-measure along the same lines as Δα, using Goldman's V-value as its base (see fn. 8).Footnote 14 That said, Goldman's V-value does not fulfill r2.

Why should we care about our alethic potential, especially when a lower initial α-value (α₀) is a factor for increasing it? Should we strive to hold more initial false beliefs (or fewer initial true beliefs) to increase our alethic potential? An anonymous reviewer has proposed the following analogy. Imagine that there is a new kind of utilitarian who proposes the quantity of ‘wealth potential’, defined as the difference between a society's current level of wealth and its expected level at a future time. As that society produces more wealth, its wealth potential tends to decrease. Suppose that this new kind of utilitarian claims that wealth potential is a neglected value that should be promoted. Should we promote it by ensuring that society doesn't become too prosperous? The reviewer has a point, but before discussing it, I want to stress an important disanalogy. The maximum possible amount of wealth is finite because the amount of ecological resources is finite. Consequently, increasing the amount of wealth of a society necessarily decreases its wealth potential. This is not true for alethic potential because the amount of alethic value is potentially infinite, since there are infinitely many truths ‘out there’.

The notion of epistemic sanity should capture the dialectics between the infinite amount of truths ‘out there’ and the finite cognitive resources that are available for a finite reasoner to ‘convert’ those truths into alethic value (i.e., into true beliefs, see fn. 13). There is indeed a trade-off between rationality, understood as the straightforward maximization of (expected) epistemic value at a moment, and epistemic sanity (this is the reviewer's point). This trade-off is even more apparent if we do not assume that agendas are fixed and finite. If the number of true-believable truths is infinite, then there is no sense in which a finite reasoner can maximize epistemic value at a moment (or even get closer to maximizing it; see Dantas 2021). In this case, her capacity to keep improving her epistemic situation is as important as the epistemic value that she currently possesses. This claim does not entail that we should strive to hold more initial false beliefs (or fewer initial true beliefs) because the expected α-value of an agent upon reasoning depends on her initial set of beliefs. False initial beliefs may cause a lower expected α-value by being used as premises for false conclusions. The lack of relevant true beliefs may cause the same by impairing the agent's capacity to draw conclusions from the available evidence.

A rational finite reasoner should increase her current α-value by holding true beliefs that increase her alethic potential (i.e., beliefs that are epistemically relevant to her):

Definition 2 (Epistemic relevance): The belief that ϕ is positively relevant to an agent with a set of beliefs B iff $\Delta\alpha(\mathrm{B} \cup \{\phi\}) > \Delta\alpha(\mathrm{B} \setminus \{\phi\})$,

where Δα(B) is the alethic potential of the agent if her initial set of beliefs were B. The belief that ϕ is irrelevant iff $\Delta\alpha(\mathrm{B} \cup \{\phi\}) = \Delta\alpha(\mathrm{B} \setminus \{\phi\})$ and it is negatively relevant iff $\Delta\alpha(\mathrm{B} \cup \{\phi\}) < \Delta\alpha(\mathrm{B} \setminus \{\phi\})$. For simplicity, I will use ‘relevant’ for positively relevant beliefs and ‘non-relevant’ for both irrelevant and negatively relevant beliefs. The metaphor is that relevant beliefs ‘encapsulate’ the alethic value of many beliefs, in such a way that a finite reasoner may have that value ‘at her reach’ without needing to hold many beliefs. Epistemic sanity would be related to the holding of relevant but not non-relevant beliefs, independently of whether they are true or false.
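Definition 2 can be rendered schematically as follows (my own sketch; alethic_potential is a hypothetical placeholder for whatever estimate of Δα one adopts, e.g., one obtained by simulation as discussed below).

    # Illustrative classifier for the epistemic relevance of a single belief phi.
    def relevance(belief_set, phi, alethic_potential):
        with_phi = alethic_potential(belief_set | {phi})
        without_phi = alethic_potential(belief_set - {phi})
        if with_phi > without_phi:
            return "positively relevant"
        if with_phi < without_phi:
            return "negatively relevant"
        return "irrelevant"

    # Toy usage with a dummy potential that merely counts beliefs (illustration only):
    print(relevance(frozenset({"p", "q"}), "r", lambda beliefs: len(beliefs)))  # positively relevant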

Epistemic sanity would be an additional requirement of rationality for finite reasoners. Rational finite reasoners should maximize α-value by holding true but not false beliefs, but they also should not hold non-relevant beliefs, even if they are true. For example, if I believe that every A is a B, then the beliefs that a is a B, for each a that is an A, may be non-relevant to me (even if they are true) because they are easily derivable from the general belief when necessary. I should not hold those as explicit beliefs. A rational finite reasoner should not hold non-relevant beliefs because the holding of those beliefs could be a waste of cognitive resources that does not enhance her epistemic situation. False beliefs may increase the alethic potential of an agent, but only if they promote the acquisition of new true beliefs (i.e., if they are false but relevant beliefs).Footnote 15 Nevertheless, the alethic potential of that agent would be further increased if these false beliefs were withdrawn upon investigation. Rational finite reasoners should strive to hold only true and relevant beliefs to maintain their epistemic sanity.

There are technical issues with the measurement of the alethic potential of an agent. For example, how much reasoning should we allow between the measures of α₀ and α₁? I think we should let the agent reason until her beliefs become stable (i.e., they would not change if she continued reasoning; see Dantas 2021). Another issue is that this measurement can hardly be carried out analytically. The measurement is feasible in computational epistemology (e.g., Olsson 2011; Douven 2013; Trpin and Pellert 2019), where epistemologists design computer simulations of agents interacting with environments that are randomly generated from fixed parameters (a class of environments). The measurement of α₀ is feasible because we have access to the belief-values of the agent and the features of the environment before a simulation runs. For the same reason, we can measure the α-value of the agent after the simulation halts. The ‘final’ α-value of an agent in a random environment is a contingent notion, but if the number of environments is large enough, then the mean of those values should approximate the agent's expected α-value after the investigation (α₁). The computational study of epistemic sanity is a matter for another paper.
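For concreteness, the simulation protocol just described might be scaffolded as follows (entirely my own sketch; random_environment and run_agent_until_stable are hypothetical placeholders for a concrete environment class and agent model, and alpha is an alethic-value measure such as those of section 1).

    import random

    # Illustrative Monte Carlo estimate of an agent's alethic potential (Definition 1).
    def estimate_alethic_potential(initial_beliefs, alpha, random_environment,
                                   run_agent_until_stable, n_runs=1000, seed=0):
        rng = random.Random(seed)
        alpha_0 = alpha(initial_beliefs)                       # current alpha-value
        final_values = []
        for _ in range(n_runs):
            env = random_environment(rng)                      # sample an environment from the class
            final_beliefs = run_agent_until_stable(initial_beliefs, env)
            final_values.append(alpha(final_beliefs))
        alpha_1 = sum(final_values) / n_runs                   # approximates the expected final alpha-value
        return alpha_1 - alpha_0                               # Delta-alpha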

2. Rational Suspension

I intend to argue that rational suspension favors the holding of true and relevant beliefs and that it is cognitively parsimonious. This discussion is not straightforward because there are different views on suspension. The credal view (e.g., Sturgeon 2008) states that suspending about a proposition is to hold ‘middling’ credences about it and its negation (i.e., credences that do not surpass the thresholds for belief and disbelief). The second-order view (e.g., Raleigh 2019) states that suspending about a proposition involves (i) not holding first-order beliefs about it or its negation and (ii) holding a second-order belief such as that you cannot yet tell whether that proposition is true or false (Raleigh 2019: 9). The interrogative view (Friedman 2015) states that suspending about a proposition involves adopting an interrogative attitude about it: to inquire actively about its truth. The anti-interrogative view (Lord 2020) states that suspending about a proposition involves adopting an anti-interrogative attitude about it: to overlook the evidence about its truth (e.g., because you consider it non-relevant). There is an interesting discussion about which of these views (if any) describes our pre-theoretical notion of suspension.Footnote 16 I will not attempt to settle this discussion here.

In discussing the epistemic features of suspension, I will assume the ‘normative core’ of the second-order, interrogative, and anti-interrogative views: if a rational agent suspends about a proposition, then she should not hold beliefs about it or its negation.Footnote 17 This combination of attitudes is impossible by definition in the second-order view (item i in the last paragraph). Suspending about a proposition while holding beliefs about it or its negation is possible in the interrogative and anti-interrogative views, but this combination of attitudes should be seen as counternormative. For example, Friedman (2015: 11) argues that holding an interrogative attitude about a proposition that you already believe may lead to irrational (because incessant) double-checking (the argument does not apply to middling credences). The holding of an anti-interrogative attitude about a proposition that you believe (including middling credences) would manifest a form of dogmatism that is irrational for fallible reasoners like us. A rational agent who believes a proposition should eventually stop actively looking for evidence about it but should keep herself open to that evidence (if it happens to appear).

The claim that rational suspension avoids the adoption of many beliefs does not establish its cognitive parsimony, because suspending may have its own cognitive cost. The discussion about the cognitive cost of suspending is also not straightforward because of the different views of suspension. There is empirical literature about the cognitive cost of belief formation that is independent of those views and applies to this discussion. The ‘Cartesian model’ states that the acceptance or rejection of incoming information (belief formation) is the product of an effortful assessment process that comes after the ‘automatic’ (and relatively costless) processing of the information. Suspending in the presence of evidence would be as effortful as forming a belief about it. The ‘Spinozean model’ states that the acceptance of incoming information is part of its automatic processing and that its rejection occurs after (and is more effortful than) this processing. Suspending in the presence of direct evidence would be more effortful than forming a belief about it. Nadarevic and Erdfelder (2013) present empirical data in favor of the Cartesian model, while Gilbert (1991), Hasson et al. (2005), and Richter et al. (2009) present data in favor of the Spinozean model. I will return to these models and the cognitive parsimony of suspension.

2.1. Opinionation

Opinionation is the non-parsimonious practice of an agent who holds beliefs about every proposition on her agenda, where an agent's agenda is the set of those propositions whose truth-values interest her. Opinionation may be rational when the agent possesses adequate evidence about every proposition on her agenda. The ‘problem of opinionation’ regards how an agent should set her beliefs for propositions that are on her agenda but about which she does not have evidence. This problem has no solution for opinionated agents who hold only full beliefs. In the absence of evidence for propositions on their agendas, these agents could only adopt random and unmotivated beliefs. The situation is not so obvious for opinionated agents who hold credences because there are normative models regarding credences that require opinionation (e.g., the Bayesian model) and arguments within those models that prescribe how agents should set their credences in the absence of evidence.

The Bayesian model of rationality comprises the norms of probabilism and conditionalization. Probabilism states that a rational credence function must be consistent with the axioms of probability (see Kolmogorov 1950). Conditionalization states that rationality requires a reasoner who learns some new piece of evidence to update her previous credences by using Bayesian conditionalization. These norms are supported by Dutch book arguments (see Vineberg 2016), the accuracy arguments from EUT (e.g., Joyce 1998; Leitgeb and Pettigrew 2010), among others (e.g., the arguments from Cox's theorem). The accuracy arguments assume (officially, as a simplifying idealization) that agents are opinionated over a fixed agenda. But opinionation is also a normative consequence of the Bayesian model because probability functions are total and not partial functions and there is no update using Bayesian conditionalization from the absence of belief-values to a belief-value.Footnote 18 If probabilism and conditionalization are norms of rationality, so must be opinionation.

The Bayesian model of rationality has the following core requirements:

  (b1) The agent's beliefs take any of continuum-many values between 0 and 1 (credences);

  (b2) The agent's credences are consistent with the axioms of probability;

  (b3) The agent updates her credences in the face of additional evidence using Bayesian conditionalization.

Requirement b1 states that the Bayesian agent holds credences. The other requirements guarantee that she fulfills probabilism (b2) and conditionalization (b3). b1–b3 do not determine how a Bayesian agent should set her credences in the absence of evidence (other than that they should be probabilistic). The model is silent about the problem of opinionation. An often discussed particular case of this problem regards priors: how should an agent set her credences at the very beginning of her credal life? The problem of opinionation is more general than that of priors because the agent may still lack evidence about some proposition after the investigation and not only prior to it. Subjective Bayesians think that b1–b3 exhaust the Bayesian model. But then a Bayesian agent would be in the same position as an opinionated agent who holds only full beliefs: she may only adopt random and unmotivated beliefs (although probabilistic ones) in the absence of evidence about some proposition on her agenda. This is unsatisfactory because the Bayesian model requires rational agents to be opinionated.
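For readers who prefer a concrete rendering, b1–b3 can be illustrated over a finite set of possible worlds (my own toy example; real Bayesian agendas need not be finite, and the worlds, propositions, and numbers below are assumptions made only for illustration).

    # Credences induced by a probability distribution over worlds (b1, b2);
    # learning evidence E (a set of worlds) is modelled as Bayesian conditionalization (b3).

    def credence(dist, proposition):
        """Credence in a proposition (modelled as the set of worlds where it is true)."""
        return sum(p for world, p in dist.items() if world in proposition)

    def conditionalize(dist, evidence):
        """Update the distribution on evidence by Bayes' rule."""
        p_e = credence(dist, evidence)
        if p_e == 0:
            raise ValueError("cannot conditionalize on zero-probability evidence")
        return {w: (p / p_e if w in evidence else 0.0) for w, p in dist.items()}

    prior = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}
    rain = {"w1", "w2"}                                    # a proposition, true at w1 and w2
    posterior = conditionalize(prior, {"w1", "w2", "w3"})  # learn that w4 is ruled out
    print(credence(prior, rain), credence(posterior, rain))  # 0.5 0.666...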

Objective Bayesians (e.g., Landes and Williamson 2013) propose principles such as the principle of indifference as a solution to the problem of opinionationFootnote 19: in the absence of evidence, agents should distribute their credences equally among the propositions that express the alternative outcomes under consideration.Footnote 20 Pettigrew (2016b) proposes an accuracy argument for the principle of indifference as a solution for the problem of priors.Footnote 21 He observes that the indifferent credence function for a set of propositions worst-case dominates any other (opinionated) credence function for that set in terms of inaccuracy. For example, consider an exhaustive and exclusive pair of propositions {ϕ, ψ} (e.g., the contradictory pair, where ψ ≡ ¬ϕ). The indifferent credence function for this set is such that cr₀(ϕ) = cr₀(ψ) = .5. For any credence function cr ≠ cr₀, it is the case that max(f(cr₀, w)) < max(f(cr, w)), where f is restricted to that set and relativized to the (epistemically?) possible situations w. The minimax principle (also referred to as ‘maximin’) states that a rational agent should minimize the loss in the worst-case scenario (minimize the maximum loss). If minimax were a general principle of rationality, then agents would be required to adopt the indifferent credence function in every situation. This is not the case, but it may be the case that agents should minimize the maximum loss at the beginning of their credal lives (or in the absence of evidence in general). This would be necessary for them to ‘initialize’ their opinionation, although they should update by using Bayesian conditionalization afterward.
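A quick numerical check of the worst-case claim, using the Brier score over a contradictory pair (my own illustration; the particular credence values are arbitrary):

    # Worst-case Brier inaccuracy over {phi, not-phi} for a credence cr in phi
    # (and 1 - cr in not-phi), across the two possible situations.
    def brier(cr, phi_true):
        v = 1.0 if phi_true else 0.0
        return (v - cr) ** 2 + ((1 - v) - (1 - cr)) ** 2

    def worst_case(cr):
        return max(brier(cr, True), brier(cr, False))

    for cr in (0.5, 0.6, 0.8, 1.0):
        print(cr, worst_case(cr))   # 0.5 -> 0.5, 0.6 -> 0.72, 0.8 -> 1.28, 1.0 -> 2.0

The indifferent credence .5 has the smallest maximum inaccuracy, which is what the minimax argument exploits.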

Pettigrew's argument assumes opinionation. Without this assumption, it follows from the same premises that agents should suspend and not adopt indifference in the absence of evidence. If inaccuracy is the measure of epistemic value, then suspending about an exhaustive and exclusive non-unitary set of propositions dominates (and worst-case dominates) indifference. Suspension about such a set guarantees that f = 0 and indifference guarantees that f > 0 because it requires the agent to hold a positive credence in at least one false proposition. The situation is not so obvious if the function α is the measure of epistemic value because, in this case, suspension is not a cheap way of achieving epistemic value. Regardless, suspension dominates indifference in terms of alethic value in some conditions.Footnote 22 Indifference about an exhaustive and exclusive pair of propositions (e.g., a contradictory pair) guarantees that t = f and that t − f = 0. Suspension guarantees that t = f = 0 and also that t − f = 0. So far so good, but if the function α is to deal with the problem of contradictory pairs by adopting weights Rt − Wf such that R < W, then suspension still guarantees that α(t, f) = 0, but indifference guarantees that α(t, f) < 0. Suspension dominates indifference in this case. The same holds for other small non-unitary exhaustive and exclusive sets of propositions, depending on the relative sizes of W and R.Footnote 23
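The comparison just made can be spelled out numerically (again my own sketch, with the illustrative weights R = 1 and W = 2 and Brier-style t and f):

    # Suspension vs. indifference over a contradictory pair, scored with Rt - Wf.
    def t_and_f(beliefs):
        """Brier-style accuracy and inaccuracy; beliefs = list of (truth-value, credence)."""
        t = sum((1 - abs(v - b)) ** 2 for v, b in beliefs)
        f = sum((v - b) ** 2 for v, b in beliefs)
        return t, f

    def alpha(t, f, R=1.0, W=2.0):   # assumed weights with R < W
        return R * t - W * f

    indifferent = [(1.0, 0.5), (0.0, 0.5)]   # credence .5 in phi and in its negation
    suspended = []                           # no beliefs about the pair

    print(alpha(*t_and_f(indifferent)))      # -0.5  (t = f = 0.5)
    print(alpha(*t_and_f(suspended)))        #  0.0  -> suspension dominates indifference here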

An agent may lack evidence about propositions either at α₀ (before investigating) or at α₁ (afterward). Maintaining indifference about propositions about which you still lack evidence after the investigation may not be epistemically sane because, in this case, suspending about these propositions may increase α₁ and, consequently, Δα. Adopting indifference about propositions about which you lack evidence before investigating may be epistemically sane, but only if you quit indifference after the investigation (either by suspending or by adopting non-indifferent beliefs). In this case, the agent could have a lower α₀ than if she had suspended, but not necessarily a lower α₁, because suspension does not dominate valued belief functions in general.Footnote 24 The same holds for opinionation. Opinionation is only epistemically sane if the agent has evidence for all propositions on her agenda (when she should adopt a non-indifferent belief function) or when she expects to find all the relevant evidence through investigation (when she should adopt a non-indifferent belief function after the investigation). Consequently, a rational finite reasoner should be opinionated only when she currently possesses, or expects to obtain by adequately exploring the environment, evidence about every proposition on her agenda.

Opinionation should not be assumed (even as a simplifying idealization) in arguments supporting normative conclusions because it artificially eliminates ‘from the competition’ non-opinionated epistemic practices that may yield more (expected) alethic value than opinionated practices. In addition, the assumption of opinionation defeats the purpose of veritism, of supporting norms of rationality from truth-conduciveness alone (unless opinionation is also supported in that way). This issue is highlighted by Littlejohn:

If we want to show that Agnes [an arbitrary agent] really should aspire to have partial beliefs that have certain properties, we need to think of Agnes’ available options as involving suspension and opinionation and we need a value theory that tells that Agnes could be better off for being opinionated. (Littlejohn 2015: 222)

The arguments in the previous paragraphs suggest that Agnes is better off being opinionated only in some cases (e.g., when she has evidence about every proposition on her agenda). In addition, the assumption of opinionation leaves out of the investigation the most natural way in which agents earn epistemic value, i.e., by acquiring new beliefs. Relaxing this assumption is problematic for EUT because its measure of epistemic value (inaccuracy) cannot be used to compare agents with different numbers of beliefs. The problem is avoided by using the function α as the measure of epistemic value.

2.2. Skepticism

Without claiming historical accuracy, I will follow Comesaña and Klein (2019) in considering “Pyrrhonian skepticism” (in the following, ‘skepticism’) to be absolute skepticism, i.e., the idea that a rational reasoner should suspend about every proposition on her agenda (‘general suspension’).Footnote 25 I will also presuppose that general suspension requires an agent not to hold first-order beliefs (see ‘normative core’ in section 2). Finally, I will presuppose that skepticism demands a general suspension that is persistent because non-skeptical agents may provisionally suspend (e.g., because they lack evidence) while intending to adopt beliefs as soon as they encounter adequate evidence.

Sextus Empiricus thus narrates the conversion of a rational agent into skepticism:

Men of talent, troubled by the anomaly in things and puzzled as to which of them they should rather assent to, came to investigate what in things is true and what is false, thinking that by deciding these issues they would become tranquil. The chief constitutive principle of skepticism is the claim that to every account an equal account is opposed; for it is from this, we think, that we come to hold no beliefs. (Sextus Empiricus 2000, 1.12)

The idea is that an inquisitive agent will eventually encounter conflicting evidence about every proposition on her agenda, in which case rationality would require her to adopt general suspension. It is unlikely that an actual agent encounters conflicting evidence about every proposition on her agenda, but the skeptics are prone to offer ‘general counterevidence’.Footnote 26 Independently of the nature of the evidence that would prompt a rational agent to adopt skepticism, there are objective conditions in which she would find it advantageous (or, at least, not disadvantageous). The adoption of skepticism involves general suspension, which amounts to α₁ = 0. Consequently, a rational agent would only find it advantageous to adopt skepticism if her total evidence is evidence for α₀ ≤ 0 (i.e., t ≤ f). Independently of its nature, the evidence necessary for the rational adoption of skepticism should be evidence sufficient for the belief that t ≤ f.

Sorensen (1987: 312) proposes that someone is an anti-expert about a proposition ϕ when ϕ is true iff she does not believe that ϕ. Egan and Elga (2005) generalize this notion to a set of propositions: “[A]n agent is an anti-expert wrt those propositions if the agent is confident in at least one of them, in the sense that his degree of belief in it is at least 90%; and at least half of the propositions that the agent is confident in are false” (Egan and Elga 2005: 84). Roughly, an agent is an anti-expert wrt a set of propositions iff her beliefs about those propositions are such that t ≤ f. In this context, the evidence necessary for the rational adoption of skepticism should be evidence sufficient for the agent to consider herself an anti-expert about everything she currently believes. Sorensen argues that it is never rational for an agent to believe herself to be an anti-expert, where his argument relies on two putative requirements of rationality: that a rational agent fulfills probabilism and that she is correct about her own beliefs (transparency).Footnote 27 Since a rational agent cannot believe herself to be an anti-expert, Sorensen argues, she should suspend about her anti-expertise while maintaining her other beliefs.Footnote 28

Egan and Elga agree that believing yourself to be an anti-expert is not rational, but they disagree about how a rational agent should react to evidence of her anti-expertise. They suggest that, in the face of evidence of her anti-expertise about a set of propositions, an agent should suspend about those propositions. In their argument, Egan and Elga rely on a somewhat stronger notion of anti-expertise: “when one becomes convinced that one's all-things-considered judgments in a domain are produced by an anti-reliable process, then one should suspend judgment in that domain” (Egan and Elga 2005: 83). This claim relates anti-expertise not only to t ≤ f but also to Δα ≤ 0. Egan and Elga's proposal is in line with the skeptical story of conversion: if an agent believes herself to be an anti-expert about everything she believes (or may come to believe), then she should adopt general suspension. But there is a problem with the application of Egan and Elga's reaction to anti-expertise about everything that an agent believes. If you believe you are an anti-expert about everything you believe, then you believe you are an anti-expert about your anti-expertise. Consequently, you should suspend about being an anti-expert about everything you believe. But given that you have suspended about your anti-expertise, for what reason should you suspend about your other beliefs?

The arguments of Sorensen and Egan and Elga rely on idealizations (e.g., probabilism) that I cannot assume in a study about epistemic sanity (see fn. 7). I agree with Bommarito (2010) that self-ascriptions of anti-expertise are not necessarily irrational for finite reasoners.Footnote 29 Based on veritistic considerations alone, an anti-expert who believes herself to be an anti-expert is in a better epistemic position than an anti-expert who does not believe herself to be an anti-expert: at the very least, the first has one more true belief than the second. In addition, the anti-expertise belief may serve as a flag for the reasoner to revise her other beliefs. At first sight, this claim could favor the conversion story of the skeptic because the maintenance of the anti-expertise belief (a second-order belief) could support the general suspension about all the agent's first-order beliefs. If the agent believes that α₀ ≤ 0 (i.e., t ≤ f) and that Δα ≤ 0, then adopting skepticism would result in α₁ = 0 and Δα ≥ 0. But, from the veritistic point of view, the rational revision from general anti-expertise is not one of general suspension but the ‘flipping’ of beliefs (i.e., the exchange of values between beliefs and disbeliefs), including the anti-expertise belief.

If an agent believes herself to be an anti-expert about everything that she believes, then she believes that t ≤ f and that α₀ ≤ 0. Adopting skepticism would result in t = f = 0 and α₁ = 0. But the agent expects (she believes so) that flipping will result in beliefs such that t > f and that α₁ > 0 (the inequality is strict because the anti-expertise disbelief would also come out true). If a rational agent should maximize expected value, then a rational agent in the presence of evidence of general anti-expertise should not adopt skepticism, but flip beliefs.Footnote 30 The same holds for alethic potential. The (expected) alethic potential of an agent who believes herself to be an anti-expert in Egan and Elga's stronger sense is Δα ≤ 0; the adoption of skepticism would result in Δα ≥ 0. But the alethic potential of flipping is even higher from the perspective of the agent because adopting skepticism results in α₁ = 0 and the expected value of flipping is α₁ > 0. Finally, skepticism is not cognitively parsimonious. Although it decreases the number of beliefs, both the Cartesian and the Spinozean models agree that suspending in the presence of evidence is at least as expensive as forming beliefs about the evidence. A rational finite reasoner should not adopt skepticism in any situation and she should react to evidence of anti-expertise by flipping beliefs.
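The flipping argument can be put in toy numerical form (my own illustration with full beliefs; the number of beliefs and the expected proportion of false ones are arbitrary).

    # Expected alethic value (alpha = t - f) of three reactions to evidence of
    # general anti-expertise, for an agent with n full beliefs of which she
    # expects a fraction p_false to be false (anti-expertise: p_false >= 0.5).
    def expected_alpha(n, p_false, reaction):
        t, f = n * (1 - p_false), n * p_false
        if reaction == "keep":
            return t - f
        if reaction == "skepticism":    # drop all beliefs
            return 0.0
        if reaction == "flip":          # exchange beliefs and disbeliefs
            return f - t
        raise ValueError(reaction)

    for reaction in ("keep", "skepticism", "flip"):
        print(reaction, expected_alpha(100, 0.7, reaction))
    # keep -40.0, skepticism 0.0, flip 40.0 -> flipping maximizes expected alethic value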

2.3. Minimax

Pettigrew (2016b: 45) acknowledges that the “primary demerit” of his argument is that it “relies on Minimax, which many will say is not a norm of rational choice”. Using the minimax principle to support the principle of indifference depends on the assumption of opinionation because suspension dominates (and worst-case dominates) indifference in some conditions. The unconstrained use of the minimax principle favors the adoption of skepticism. This is the case because suspension worst-case dominates (but does not dominate, see fn. 24) non-indifferent belief functions, except for those that only ascribe maximum and minimum value to necessary and impossible propositions (respectively). Part of the appeal of the skeptical story relies on a covert use of minimax: given even minimal evidence of your general anti-expertise, it is possible that you are wrong about most of your beliefs, so in the worst-case situation you are wrong about most of your beliefs. In that situation, you would be better off adopting general suspension. Consequently (by the minimax principle), you should adopt general suspension in the actual situation.

Minimax is a very conservative principle of decision-making that makes all but the most risk-averse behavior irrational. Harsanyi (1975: 595) comments that the minimax principle was generally accepted in decision theory from the mid-forties to the mid-fifties, but since then the general opinion is that it leads to “serious paradoxes” and “wholly unacceptable practical decisions”: “If you took the maximin principle seriously then you could not ever cross a street (after all, you might be hit by a car); you could never drive over a bridge (after all, it might collapse); you could never get married (after all, it might end in a disaster), etc.”. The unrestricted use of minimax as a principle of rationality results in a very conservative model of rationality, which ultimately leads to skepticism.Footnote 31 To my mind, it is absurd to care about worst-case maximization when you can maximize actual value or expect to do so (e.g., by flipping beliefs). Minimax is a reasonable principle of rationality only when it does not contradict well-established principles, such as dominance and maximization of expected value.

Pettigrew's reaction to the danger of skepticism is to limit the scope of minimax to the beginning of our credal lives: “[I]t [the minimax principle] applies only at the beginning of an agent's credal life, before she has acquired any evidence and before she has assigned credences to the propositions she entertains. … For an agent at any other stage of her credal life, Minimax does not apply. Instead, in those situations, the agent ought to maximise her subjective expected utility” (Pettigrew 2016b: 45). We have seen that Pettigrew's restriction is not sufficient for justifying the principle of indifference without assuming opinionation. This restriction also presupposes opinionation because it assumes that the beginning of an agent's credal life is the only moment “before she has acquired any evidence” or “before she has assigned credences to the propositions she entertains”. The restriction of minimax that does not presuppose opinionation is to the situations where the agent does not have evidence for some proposition on her agenda. In those situations, the agent should often suspend, but she should do so not because she should act to maximize worst-case value, but because suspension maximizes actual or expected value. These are cases in which minimax does not contradict the well-established principles of rationality.

3. Conclusions

The three major contributors to epistemic sanity have in common that they favor the holding of true and relevant beliefs and are cognitively parsimonious. Rational finite reasoners should suspend in the absence of evidence. Suspending in the absence of evidence avoids opinionation, which is not cognitively parsimonious (it involves the adoption of many beliefs) and whose epistemic value may be dominated by that of suspension in some situations. Suspending in the absence of evidence is cognitively parsimonious on both the Cartesian and the Spinozean models because, in either model, it does not require dedicated cognitive resources. Section 2.2 claims that a rational finite reasoner should not adopt general suspension in the face of evidence of her anti-expertise because she expects to achieve more epistemic value by flipping her beliefs in the direction that the evidence suggests.Footnote 32 Suspending in the presence of evidence may not be parsimonious because it may have a cognitive cost that is higher than that of belief formation (e.g., if the Spinozean model is correct). In other words, a rational finite reasoner should suspend about propositions that are not ‘suggested’ by the evidence. These propositions are non-relevant in the sense that they are unlikely to be used in inferences with other relevant beliefs as premises.

Forgetting is often considered a cognitive shortcoming. This is a consequence of the idea that the role of memory is simply to store the information acquired in the past and make it available for future use. There is a growing consensus that memory has an active role in information processing (Klein et al. 2009; De Brigard 2013). In this context, Andonovski (2020) proposes that memory should be understood as a “faculty of triage”, whose role is to make the “right” information available given important constraints of time and cognitive resources. Forgetting would favor cognitive parsimony by, for example, reducing the demands on cognitive central processes that would otherwise be needed to suppress interference (Baddeley et al. 2020: 307). Michaelian (2011) claims that virtuous forgetting may also increase the reliability of a memory system, which is related to the increase of α-value (see fn. 13). Consider a subject whose other cognitive systems are reliable so that the records stored in her memories are accurate at the moment of storage. As the world changes, some of the once-accurate records will become inaccurate. The older the record, the greater the chance that it has become inaccurate. A virtuous memory system would forget ‘older’ records (in the sense of not having been retrieved recently) as a means to forget inaccurate records (Michaelian 2011: 407).Footnote 33 ‘Older’ records are usually non-relevant in the sense that they are unlikely to be used in inferences with other relevant beliefs as premises.

The principle of clutter avoidance (CA) states that “one should not clutter one's mind with trivialities” (Harman 1986: 12). Harman motivates his principle by appealing to its cognitive parsimony: “There is a limit to what one can remember, a limit to the number of things one can put into long-term storage, and a limit to what one can retrieve. It is important to save room for important things and not clutter one's mind with unimportant matters” (Harman 1986: 41–2). The epistemic features of CA are difficult to discuss because of its usual subjective interpretation. Harman (1986: 55) warns us not to clutter our minds with “matters in which one has no interest”, but what makes a subject matter interesting? “Roughly, it's to have some interest or desire served by having beliefs (or knowing) about the relevant subject matter” (Friedman 2018: 3). This subjective interpretation of CA is not very interesting to epistemologyFootnote 34: a conspiracy theorist certainly finds her favorite conspiracy interesting in this sense, but this is exactly the kind of belief with which one should avoid cluttering one's mind. The notion of relevance may be used in an epistemic interpretation of CA: an agent may believe that ϕ only if the belief that ϕ is relevant to her. Then clutter avoidance would contribute to epistemic sanity by avoiding the holding of non-relevant beliefs.Footnote 35 The study of forgetting and clutter avoidance will be carried out in other papers.

There is a residual problem with the notion of epistemic sanity that is related to the reviewer's point. An agent may still ‘pump’ her alethic potential by initially holding false but irrelevant beliefs and dumping them during the investigation phase. This strategy may increase her Δα in a way that is neither rational nor cognitively parsimonious. There is a slightly different notion of epistemic sanity that may be used to deal with this problem. The ‘mean relevance’ of the beliefs of an agent with a set of beliefs B and alethic potential Δα is the following quantity:

Definition 3 (Mean relevance): Δα/|B|,

where |B| is the number of beliefs in B. The maximization of mean relevance is related to epistemic sanity not only because this quantity varies directly wrt the alethic potential (Δα), but also because it varies inversely wrt the number of beliefs of the agent (|B|). The maximization of mean relevance demands the holding of few but relevant beliefs.Footnote 36 Opinionation tends to decrease mean relevance because it demands the holding of non-relevant beliefs (e.g., indifferent beliefs). Skepticism would be a degenerate case of suspension that does not increase mean relevance (this value is not even defined for full skeptics). Finally, rational suspension increases mean relevance because it involves the maintenance of fewer but relevant beliefs.

The notion of alethic potential is relative to an agent (e.g., her beliefs and reasoning skills) but also to the environment the agent is in (because α₁ is calculated as the mean α-value across similar environments). In this sense, the investigation of epistemic sanity gives rise to a form of ‘impure veritism’, where the measure of epistemic value is veritistic but the evidence that is available is also relevant. The notion of mean relevance may be used in a quantitative notion of relevance for individual beliefs. The relevance of a belief that ϕ ∈ B may be defined as

$$\Delta\alpha(\mathrm{B})/\vert\mathrm{B}\vert - \Delta\alpha(\mathrm{B}\setminus\{\phi\})/\vert\mathrm{B}\setminus\{\phi\}\vert,$$

where B is the set of beliefs of the agent. The measure of individual relevance enables impure veritism to deal with some limitations of ‘pure’ veritism. For example, the measure of individual relevance may be used to explain the existence of “epistemically useful falsehoods” (Elgin 2019). These are false beliefs with high relevance, which promote the adoption of true and relevant beliefs by the agent (see fn. 15 for an example). The measure of individual relevance may also be used to answer some criticisms of veritism. For example, DePaul (2001) claims that veritism implies that all true beliefs are equally epistemically valuable, but that this implication is false because there are cases where two sets each containing an equal number of true beliefs intuitively differ in epistemic value. Impure veritism does not support that implication because the beliefs in the two sets may differ in their epistemic relevance. These are matters for other papers.Footnote 37
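The two relevance quantities may be put side by side as follows (my own sketch; alethic_potential is again a hypothetical placeholder for an estimate of Δα given an initial belief-set).

    # Illustrative mean relevance (Definition 3) and individual relevance (displayed above).
    def mean_relevance(belief_set, alethic_potential):
        return alethic_potential(belief_set) / len(belief_set)   # undefined for full skeptics (|B| = 0)

    def individual_relevance(belief_set, phi, alethic_potential):
        reduced = belief_set - {phi}                              # assumes the agent holds other beliefs
        return (mean_relevance(belief_set, alethic_potential)
                - mean_relevance(reduced, alethic_potential))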

Footnotes

1 I use ‘belief’ as a general term, which encompasses both full beliefs and credences.

2 Belief and disbelief are often treated as being fundamentally the same attitude (disbelieving ϕ would be the same as believing ¬ϕ). There are reasons for rejecting this trend (e.g., somebody who is not competent with negation may still be able to disbelieve, see Lord 2020: fn. 1), but, for simplicity, I will follow the trend.

3 The situation is not so simple because there are different views about suspension and, in some views, suspending demands the adoption of beliefs (e.g., middling credences or second-order full beliefs). In addition, suspending may have its own cognitive cost and fail to be cognitively parsimonious. I discuss these issues in section 2.

4 I am agreeing with Garber (1984: 101) that normative claims about rationality and descriptive claims about an ideal reasoner are inter-translatable: “The Bayesian thought policeman [who enforce normative claims] might be thought of as clubbing us into behaving like ideal learning machines [ideal reasoners], if we like. Or we can think of the ideal learning machine as an imaginary person who behaves in such a way that he never needs correction by the Bayesian thought police. The two models thus seem inter-translatable.”

5 For example, believing every logical truth (e.g., h∨¬h, (h∨¬h)∨¬(h∨¬h), etc.) and every logical consequence of some evidence e (e.g., $e \wedge (h \vee \neg h)$, $e \wedge ((h \vee \neg h) \vee \neg(h \vee \neg h))$, etc.) is often irrelevant to our goals, but any attempt to do so would consume our scarce cognitive resources. In addition, attempting to hold those beliefs would not be truth-conducive in general because it would often amplify minor mistakes. For example, if e is false, then all those logical consequences are false as well. These issues are related to the principle of clutter avoidance, which will be discussed in the conclusions.

6 The Bayesian ideal reasoner updates her credences by using Bayesian conditionalization (Leitgeb and Pettigrew 2010). Given the standard Bayesian assumptions of normality and finite additivity, this form of conditionalization is such that if a reasoner reaches maximum credence (certainty) on a proposition at a time, her certainty will be maintained after any subsequent update. For example, the Bayesian ideal reasoner cannot be certain that she is having spaghetti for dinner today (because she is doing so) and forget this irrelevant fact a year later (i.e., lose certainty about it) (see Talbott 1991: 139).

7 The Bayesian ideal reasoner has beliefs that are consistent with the axioms of probability, including normality and finite additivity. Normality entails that she must be certain of (i.e., hold maximum credence on) every logical truth. Also, if she comes to learn some evidence, then normality and finite additivity require her to be certain of every logical consequence of that evidence. This is a form of logical omniscience (see Garber 1984: 104), which is incompatible with clutter avoidance (see fn. 5).

8 This parallels the argument against attentiveness to the currently available evidence being the fundamental epistemic value (Goldman 2002: §3). Goldman's measure of epistemic value is his “V-value”: $t(\mathbf{B}) = \sum_{\phi \in \mathbf{B}} 1 - \vert v(\phi) - b(\phi)\vert$ (Goldman 1999: §3.4).
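
As a toy illustration (the propositions and numbers are made up), the V-value sums, over the propositions in B, how close the degree of belief b(ϕ) is to the truth value v(ϕ):

```python
# Toy V-value computation: v maps propositions to truth values (1 true, 0 false),
# b maps them to the agent's degrees of belief.
v = {"rain": 1.0, "snow": 0.0}
b = {"rain": 0.8, "snow": 0.3}
v_value = sum(1 - abs(v[p] - b[p]) for p in b)
print(v_value)  # (1 - 0.2) + (1 - 0.3) = 1.5
```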

9 Carr (2015: 223) also stresses this point: “Epistemic decision theory [EUT] usually presupposes that the credence functions it compares are defined over the same algebra of propositions. Once we abandon this presupposition, new difficulties arise.” Dantas (2021) discusses the difficulties related to the assessment of infinite agendas.

10 The third step in an investigation within EUT is to use the measure of inaccuracy and a principle of decision theory in an argument for some norm of rationality. I will discuss some uses of principles of decision theory in section 2.3.

11 “The basic intuition underlying it is clear enough, to wit, that the higher one's degree of belief in a true proposition is, the more accurate one is, ceteris paribus, and also the lower one's degree of belief in a false proposition is, the more accurate one is, ceteris paribus” (Douven 2013: 436).

12 This function is such that (t − f)/(t + f + c) > (t − f)/(t + f + c + 2x) when t > f, where x is the degree of belief that the agent holds in each proposition of the contradictory pair. The situation changes when t ≤ f, but this is a case of anti-expertise where the holding of contradictory beliefs may be epistemically beneficial for the agent (see section 2.2).

13 I suppose that a subject believes that ϕ if she is disposed to retrieve some record from her memory and to accept it as a veridical representation that ϕ, because (explicit) belief is usually thought to involve both a mental representation and a positive assessment of it (Bogdan 1986). Although it is accepted that there is no interesting limit on the amount of information that we can hold in long-term memory (Dudai 1997), it is also accepted that the learning of new information can adversely impact our capacity to retrieve old information and vice versa (“interference”, see Baddeley et al. 2020: 291). This cognitive limitation is modeled if α(t, f) = (t − f)/(t + f + c) because α(t + 2, f) − α(t + 1, f) < α(t + 1, f) − α(t, f), which may be interpreted as a diminishing reward for ‘believing too much’.
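A quick check of this inequality (a sketch, assuming c > 0 so that 2f + c and the denominators are positive): the gain from one additional true belief is

$$\alpha(t + 1, f) - \alpha(t, f) = \frac{t + 1 - f}{t + 1 + f + c} - \frac{t - f}{t + f + c} = \frac{2f + c}{(t + f + c)(t + f + c + 1)},$$

which strictly decreases as t grows (the numerator is fixed while the denominator increases), so α(t + 2, f) − α(t + 1, f) < α(t + 1, f) − α(t, f).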

14 “Suppose that a question begins to interest agent S at time t₁ [i.e., its possible answers are on her agenda], and S applies a certain practice π in order to answer the question. If the result of applying π is to increase the V-value of the belief states from t₁ to t₂, then π deserves positive credit. If it lowers the V-value, it deserves negative credit. If it does neither, it is neutral wrt to instrumental V-value” (Olsson 2011: 128).

15 A Ptolemaic astronomer may use her false beliefs about the deferent and epicycle orbits of the Earth, Mars, and the Sun and what she takes to be their current positions, to predict correctly that Mars will be visible from Earth on September 2, 2003 (Elgin 2019: 26). I will return to false but relevant beliefs in the conclusions.

16 For example, Lord (2020: 134) argues that suspension cannot require second-order beliefs because those beliefs demand some intellectual sophistication that is out of reach of young children and non-human animals who could suspend. Raleigh (2019: 10) replies that we hesitate to ascribe suspension to young children and non-human animals because suspension in fact demands some intellectual sophistication.

17 This assumption does not hold for the credal view as I have described it: if full beliefs and suspension are defined as the holding of certain credences, then it is impossible to suspend about a proposition without holding credences about it. The assumption holds only partially (but sufficiently for our purposes) if the credal view is understood as a normative account of how a subject who holds both full beliefs and credences should update her full beliefs given her credences (e.g., Dorst 2017). In this case, a rational reasoner who suspends about a proposition should not hold full beliefs about it or its negation. Nevertheless, the credal view (in both versions) may be in tension with our pre-theoretical notion of suspension (Friedman 2013a: 62) because, supposedly, a rational agent who suspends about a number of probabilistically independent propositions should be able to suspend about their conjunction, but, if the number of propositions is large, the view states that she disbelieves it (or should do so).

18 As Easwaran recognizes: “Situations in which the agent comes to have credences in new propositions seem very different from the standard examples where an agent just learns that some proposition is true. … Bayesians already know that these cases are difficult ones to account for” (Easwaran 2013: 122).

19 The following considerations also hold for principles that are close to the principle of indifference, such as the principle of maximum entropy (Landes and Williamson 2013). For example, Landes and Williamson rely on a form of the minimax principle to justify their principle.

20 For simplicity, I have restricted the principle to the finite case. The infinite case is not straightforward because this formulation would be incompatible with countable additivity.

21 Pettigrew's argument targets only the problem of priors (and not that of opinionation) because the principle of decision theory used in the argument (the minimax principle) would only hold at the very beginning of our credal lives (see section 2.3).

22 This line of reasoning does not apply directly to the credal view, where indifference about non-unitary but small exhaustive and exclusive sets of propositions is (or requires) suspending about those propositions. Indifference about large exhaustive and exclusive sets of propositions is (or requires) disbelieving those propositions. Here, the principle of indifference commits agents to the falsity of propositions about which they have no evidence. This is a counter-intuitive prescription, which supports Friedman's complaint (see fn. 17).

23 If the measure of epistemic value is α(t, f) = (t − f)/(t + f + c), then suspension dominates indifference for exhaustive and exclusive non-unitary sets of propositions of any size as long as t > f because, in this case, α(t, f) > α(t + x, f + y) whenever 0 < x ≤ y: since indifference over such a set adds at least as much falsity as truth (exactly one member is true and the rest are false), the numerator cannot increase while the denominator grows.

24 Suspending does not dominate valued belief functions in general because in some situations these functions end up getting things more right than wrong. For example, the function cr(ϕ) = 0.9 and cr(¬ϕ) = 0.1 is worth more than suspending about this pair in the situation where ϕ is true.

25 See Frede (1997) for a historically accurate picture of the skeptical notions of belief and suspension.

26 “When someone propounds to us an argument we cannot refute, we say to him: ‘Before the founder of the school to which you adhere was born, the argument of the school, which is no doubt sound, was not yet apparent, although it was really there in nature. In the same way, it is possible that the argument opposing the one you have just propounded is there in nature but is not yet apparent to us; so we should not yet assent to what is now thought to be a powerful argument’” (Sextus Empiricus 2000: 1.34).

27 Suppose that a rational agent believes herself to be an anti-expert about ϕ. If she believes that ϕ, she believes she believes that ϕ (transparency). But she also believes (and believes that she believes) that ¬ϕ (by modus tollens on the anti-expertise belief), which leads to incoherence.

28 “Since we are warranted in making costly revisions to our background assumptions to escape acceptance of an inconsistent proposition, we are also justified in paying a high price to avoid positions which cannot be consistently accepted” (Sorensen 1987: 312).

29 “Whether or not Perfectly Rational Agents ever find themselves self-ascribing anti-expertise, such self-ascriptions can often be the most rational choice for epistemic mortals like us. In the fortunate cases where we find ourselves able to consciously change our beliefs or withhold regarding a topic, such doxastic changes often require a good deal of time to execute. During this time, to deny our own epistemic failures is not only to be dishonest with ourselves but also to rob ourselves of one of the strongest motives to keep attempting to bring about a change” (Bommarito 2010: 418).

30 The strategy of flipping beliefs exploits the understanding that “an anti-expert is as useful as an expert since you can convert anti-expert beliefs into expert beliefs by accepting their negations” (Sorensen 1987: 312).

31 Pettigrew agrees that minimax is a conservative principle of rationality, but he allies himself with such conservatism: “I have no argument for making this alliance. At this point, it seems to me, we have reached normative bedrock: one cannot argue for cognitive conservatism from more basic principles” (Pettigrew 2016b: 46). Pettigrew's conservatism is not so extreme as to lead to skepticism because he restricts the minimax principle to the beginning of an agent's credal life.

32 This practice is cognitively parsimonious because it avoids the cognitive cost of rejecting the incoming information, which is costly in both the Cartesian and the Spinozean models.

33 This claim is supported by empirical research. Schooler and Hertwig (2005), for example, elaborate on the notion of “beneficial forgetting” by proposing that losing information may aid the recognition heuristic, which relies on failures of recognition to infer which of two objects scores higher on a criterion.

34 Friedman (2018: 15) discusses the consequences of this interpretation for epistemology and concludes that “we're left with a highly interest-driven picture of how we ought to revise our doxastic states”. For example, if CA is a meta-principle that constrains principles of belief revision, then “all sorts of purely evidentialist and reliabilist potential norms are not genuine norms” (Friedman 2018: 9).

35 Michaelian (2011: 419) proposes the analogous principle of clutter elimination (CE): “if one's mind is cluttered with trivialities, one should remove them”. The epistemic interpretation of CE is the contrapositive of the epistemic interpretation of CA: if the belief that ϕ is non-relevant to an agent, then she ought not to believe that ϕ (e.g., she should forget that ϕ).

36 The measure of mean relevance is also feasible within computational epistemology, but that investigation is a matter for another paper.

37 Acknowledgements: National Council for Scientific and Technological Development (CNPq) and Coordination for the Improvement of Higher Education Personnel (CAPES).

References

Andonovski, N. (2020). ‘Memory as Triage: Facing Up to the Hard Question of Memory.’ Review of Philosophy and Psychology 12(2), 227–56.
Baddeley, A., Eysenck, M. and Anderson, M. (2020). Memory, 3rd edn. London: Routledge.
Bogdan, R.J. (ed.) (1986). ‘The Importance of Belief.’ In Belief: Form, Content, and Function, pp. 1–16. Oxford: Oxford University Press.
Bommarito, N. (2010). ‘Rationally Self-ascribed Anti-expertise.’ Philosophical Studies 151(3), 413–19.
Carr, J. (2015). ‘Epistemic Expansions.’ Res Philosophica 92(2), 217–36.
Cherniak, C. (1986). Minimal Rationality. Cambridge, MA: MIT Press.
Comesaña, J. and Klein, P. (2019). ‘Skepticism.’ In Zalta, E.N. (ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2019/entries/skepticism/.
Dantas, D. (2021). ‘How to (Blind)spot the Truth: An Investigation on Actual Epistemic Value.’ Erkenntnis. doi: 10.1007/s10670-021-00377-x.
De Brigard, F. (2013). ‘Is Memory for Remembering? Recollection as a Form of Episodic Hypothetical Thinking.’ Synthese 191(2), 155–85.
DePaul, M. (2001). ‘Value Monism in Epistemology.’ In Steup, M. (ed.), Knowledge, Truth, and Duty: Essays on Epistemic Justification, Responsibility, and Virtue, pp. 170–82. Oxford: Oxford University Press.
Dorst, K. (2017). ‘Lockeans Maximize Expected Accuracy.’ Mind 128(509), 175–211.
Douven, I. (2013). ‘Inference to the Best Explanation, Dutch books, and Inaccuracy Minimisation.’ Philosophical Quarterly 63(252), 428–44.
Dudai, Y. (1997). ‘How Big is Human Memory, or on Being Just Useful Enough.’ Learning & Memory 3(5), 341–65.
Easwaran, K. (2013). ‘Expected Accuracy Supports Conditionalization – And Conglomerability and Reflection.’ Philosophy of Science 80(1), 119–42.
Egan, A. and Elga, A. (2005). ‘I Can't Believe I'm Stupid.’ Philosophical Perspectives 19(1), 77–93.
Elgin, C. (2019). ‘Epistemically Useful Falsehoods.’ In Fitelson, B., Borges, R. and Braden, C. (eds), Themes from Klein: Knowledge, Scepticism, and Justification, pp. 25–38. Dordrecht: Springer.
Fitelson, B. and Easwaran, K. (2015). ‘Accuracy, Coherence and Evidence.’ In Gendler, T. and Hawthorne, J. (eds), Oxford Studies in Epistemology, Vol. 5, pp. 61–96. Oxford: Oxford University Press.
Frede, M. (1997). ‘The Skeptic's Beliefs.’ In Burnyeat, M. and Frede, M. (eds), The Original Sceptics: A Controversy, pp. 1–25. Indianapolis, IN: Hackett Publishing Co.
Friedman, J. (2013a). ‘Rational Agnosticism and Degrees of Belief.’ In Gendler, T. and Hawthorne, J. (eds), Oxford Studies in Epistemology, Vol. 4, pp. 57–81. Oxford: Oxford University Press.
Friedman, J. (2013b). ‘Suspended Judgment.’ Philosophical Studies 162(2), 165–81.
Friedman, J. (2015). ‘Why Suspend Judging?’ Noûs 51(2), 302–26.
Friedman, J. (2018). ‘Junk Beliefs and Interest-driven Epistemology.’ Philosophy and Phenomenological Research 97(3), 568–83.
Garber, D. (1984). ‘Old Evidence and Logical Omniscience in Bayesian Confirmation Theory.’ In Earman, J. (ed.), Testing Scientific Theories, Vol. 10, pp. 99–131. Minneapolis, MN: University of Minnesota Press.
Gilbert, D. (1991). ‘How Mental Systems Believe.’ American Psychologist 46(2), 107.
Goldman, A. (1999). Knowledge in a Social World. Oxford: Oxford University Press.
Goldman, A. (2002). ‘The Unity of the Epistemic Virtues.’ In Pathways to Knowledge, pp. 51–72. Oxford: Oxford University Press.
Harman, G. (1986). Change in View: Principles of Reasoned Revision. Cambridge, MA: MIT Press.
Harsanyi, J.C. (1975). ‘Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls's Theory.’ American Political Science Review 69(2), 594–606.
Hasson, U., Simmons, J.P. and Todorov, A. (2005). ‘Believe It or Not: On the Possibility of Suspending Belief.’ Psychological Science 16(7), 566–71.
Joyce, J. (1998). ‘A Nonpragmatic Vindication of Probabilism.’ Philosophy of Science 65(4), 575–603.
Klein, S., Robertson, T. and Delton, A. (2009). ‘Facing the Future: Memory as an Evolved System for Planning Future Acts.’ Memory & Cognition 38(1), 13–22.
Kolmogorov, A. (1950). Foundations of Probability. London: Chelsea Publishing Company.
Landes, J. and Williamson, J. (2013). ‘Objective Bayesianism and the Maximum Entropy Principle.’ Entropy 15, 3528–91.
Leitgeb, H. (2014). ‘The Stability Theory of Belief.’ Philosophical Review 123(2), 131–71.
Leitgeb, H. and Pettigrew, R. (2010). ‘An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy.’ Philosophy of Science 77(2), 236–72.
Littlejohn, C. (2015). ‘Who Cares What You Accurately Believe?’ Philosophical Perspectives 29(1), 217–48.
Lord, E. (2020). ‘Suspension of Judgment, Rationality's Competition, and the Reach of the Epistemic.’ In Schmidt, S. and Ernst, G. (eds), The Ethics of Belief and Beyond: Understanding Mental Normativity, pp. 126–45. London: Routledge.
Michaelian, K. (2011). ‘The Epistemology of Forgetting.’ Erkenntnis 74(3), 399–424.
Nadarevic, L. and Erdfelder, E. (2013). ‘Spinoza's Error: Memory for Truth and Falsity.’ Memory and Cognition 41(2), 176–86.
Oaksford, M. and Chater, N. (2007). Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Oxford: Oxford University Press.
Olsson, E. (2011). ‘A Simulation Approach to Veritistic Social Epistemology.’ Episteme 8(2), 127–43.
Pettigrew, R. (2016a). Accuracy and the Laws of Credence. Oxford: Oxford University Press.
Pettigrew, R. (2016b). ‘Accuracy, Risk, and the Principle of Indifference.’ Philosophy and Phenomenological Research 92(1), 35–59.
Pettigrew, R. (2019a). ‘Epistemic Utility Arguments for Probabilism.’ In Zalta, E.N. (ed.), Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2019/entries/epistemic-utility/.
Pettigrew, R. (2019b). ‘Veritism, Epistemic Risk, and the Swamping Problem.’ Australasian Journal of Philosophy 97(4), 761–74.
Raleigh, T. (2019). ‘Suspending is Believing.’ Synthese. doi: 10.1007/s11229-019-02223-8.
Richter, T., Schroeder, S. and Wöhrmann, B. (2009). ‘You Don't Have to Believe Everything you Read: Background Knowledge Permits Fast and Efficient Validation of Information.’ Journal of Personality and Social Psychology 96(3), 538–58.
Schooler, L. and Hertwig, R. (2005). ‘How Forgetting Aids Heuristic Inference.’ Psychological Review 112(3), 610–28.
Sextus Empiricus (2000). Outlines of Scepticism (J. Annas and J. Barnes, eds). Cambridge: Cambridge University Press.
Sorensen, R. (1987). ‘Anti-expertise, Instability, and Rational Choice.’ Australasian Journal of Philosophy 65(3), 301–15.
Sturgeon, S. (2008). ‘Reason and the Grain of Belief.’ Noûs 42(1), 139–65.
Sturgeon, S. (2010). ‘Confidence and Coarse-grained Attitudes.’ In Gendler, T. and Hawthorne, J. (eds), Oxford Studies in Epistemology, Vol. 3, pp. 126–49. Oxford: Oxford University Press.
Talbott, W. (1991). ‘Two Principles of Bayesian Epistemology.’ Philosophical Studies 62(2), 135–50.
Trpin, B. and Pellert, M. (2019). ‘Inference to the Best Explanation in Uncertain Evidential Situations.’ British Journal for the Philosophy of Science 70(4), 977–1001.
van Rooij, I. (2008). ‘The Tractable Cognition Thesis.’ Cognitive Science 32(6), 939–84.
Vineberg, S. (2016). ‘Dutch Book Arguments.’ In Zalta, E.N. (ed.), Stanford Encyclopedia of Philosophy (Spring Edition). https://plato.stanford.edu/archives/spr2016/entries/dutch-book/.
Vogt, K. (2018). ‘Ancient Skepticism.’ In Zalta, E.N. (ed.), Stanford Encyclopedia of Philosophy (Fall Edition). https://plato.stanford.edu/archives/fall2018/entries/skepticism-ancient/.
Wedgwood, R. (2002). ‘The Aim of Belief.’ Philosophical Perspectives 16, 267–97.