Does God Know What We Would Freely Do?

Keith DeRose
9/2001

One of the hottest debates in the philosophy of religion in the last quarter century or so concerns whether God possesses so-called “middle knowledge” of what creatures would freely do in various circumstances in which they possess libertarian freedom.(1) In addition to being very lively, this debate also has been conducted on a very high plane. Nevertheless, I believe, and will here argue, that it has been infected by a fundamental confusion at its very core. After exposing the confusion in sections x-x below, I will, in what remains, take some steps toward a view of what the debate looks like once it is put on the proper path.

What would middle knowledge be? One thing is clear: It would be God’s knowledge of certain conditionals. But which conditionals? Here we get two different answers, which the parties on both sides of the dispute have assumed amount to the same thing. But do they?

1. The First Construal: Middle Knowledge as “Molinistically Useful”

Suppose you are God, and you want to create a primo world. After creating lots of great stuff, you discern in your infinite wisdom that what your world really needs, to top it all off, is a free creature performing just one good action with libertarian freedom. (Of course, it’s more realistic to suppose that if one libertarian free action is good, then a really primo world would have many such actions, but we’ll keep the issue simple by supposing that you desire only one such good action.) So you create such a creature — let’s call her Eve — and put her in one of the exactly right possible situations (we’ll call this situation S1) where she’s free to choose between two courses of action, one good and one bad, and where her performing the good action is exactly what’s needed for the perfect completion of your world. If she performs the bad action, however, that ruins everything: You could have had a better world by not having any free creatures at all than you get with Eve messing up. Since Eve must exercise libertarian freedom for you to get the primo world you want, you cannot cause her to do the right action, nor can you set off a series of causes that will causally determine her to do the right thing, since either of these two courses of action is inconsistent with Eve’s exercising libertarian freedom.(2)

What’s a God to do? Is there any way of getting the primo world you desire without taking any real risk that everything will get messed up?

Perhaps you can use your knowledge, your Divine omniscience, to avoid any real risk. Though it’s a bit controversial, it’s fairly widely agreed that you cannot use any simple foreknowledge you might have of such propositions as Eve will sin (in S1). For suppose that Eve in fact will sin in S1, and you foreknow this. The supposition that you use this foreknowledge to avoid the trouble your world is headed for is itself headed for trouble. For if you then decide to put Eve in some other situation, say S2, where she’ll fare better, or to put some other, more favorably disposed, possible free creature into S1, or if you decide to skip the whole free creature idea altogether and make do with a pretty good, though not primo, completely deterministic world,(3) then, though it looks as if you’ve thereby avoided the trouble, it also looks like you didn’t know that Eve would sin after all, since it turns out not to be true that Eve sins, and you cannot have known what wasn’t true.

So, as it seems to most who study the issue, the knowledge that God at least arguably might have that could be used to avoid any real dice-throwing risks in an indeterministic world is not simple foreknowledge of such propositions as Eve will sin in S1, but is rather “middle knowledge” of certain conditionals. What would really help you in your predicament is knowledge of something like If Eve is put into S1, she will sin. Suppose you know that conditional. Then you’ll know not to put Eve into S1, and the supposition that you so use this middle knowledge to avoid trouble does not itself lead to the trouble that we hit when we assumed you used simple foreknowledge to the same effect. For if there is some other situation S2 that is such that you foresee that Eve will do the right thing if she’s put into S2, and you therefore put her into S2 rather than S1 (or if you create some other possible free creature, or none at all), and you thereby avoid trouble, we can still consistently claim that you knew the helpful conditional, If Eve is put into S1, she will sin.

So our first — and certainly the more important — construal of middle knowledge is as the knowledge of conditionals the having of which would be helpful to God in exercising providential control to get what He wants with no risks at all in situations of indeterminism — or, as we’ll put it for short, it’s the knowledge the having of which would allow God to exercise “Molinistic” control.(4) Deniers of middle knowledge, on this construal, claim that God cannot have His indeterministic cake and eat His absolutely-no-risks providential control, too. Their “Molinist” opponents claim that God can and does have this remarkable knowledge.

One clarification about Molinistic control should be made. Molinists don’t hold that God can get just anything He wants, even if He limits His wants to what is metaphysically possible. Which of the metaphysically possible situations God can make actual is limited by the facts about what creatures would freely do in various situations — and also, perhaps by other indeterministic matters, if any there be. If the truth of the matter is that Eve will sin if she’s put into S1, then even though there are possible worlds in which Eve is put into S1 but doesn’t sin, such worlds are, as it’s sometimes put, unrealizable for God. God’s menu is limited to the realizable worlds, but having Molinistic control means that God knows precisely which options are on His menu, and can get any item on that menu without having to take any risk of getting some other, perhaps less desirable, item. In your decision regarding Eve, even where we supposed that you had the knowledge to exercise Molinistic control, you still had to forego your top choice of the world in which Eve freely does right in S1 when we supposed that what you knew was that she would sin if put into S1. You didn’t get what you most wanted. Still, from among the realizable worlds, you could get the one you most wanted while taking no risk at all. If Eve’s doing right in S2 was also a very desirable outcome, and if that situation was realizable — i.e., if what you knew regarding S2 was that Eve would not sin in it — then you could get that great alternative while taking no risk. And the outcome you gave up on — Eve doing right in S1 — is an outcome you knew that you couldn’t get. By contrast, if you do not have the knowledge needed to exercise Molinistic control — here, if you don’t know whether Eve will or rather won’t sin if she’s put into S1 — you face a dilemma that a God possessing Molinistic control would never have to wrestle with. 
You must either take a real risk by putting Eve into S1, or else keep her out of S1 and thereby forego your chance at the primo world you want even though you don’t know you couldn’t have gotten that world by taking a risk. What’s more, you won’t be able to avoid risks by putting Eve instead into S2, or by putting some other creature into S1. To get any world in which a creature performs an action with libertarian freedom, you will have to take some risk that she will instead not perform the action. By contrast, if you have Molinistic control over the world, then unless you are extremely unlucky — unless, that is, all the facts about what creatures will freely do if put into various situations go against what you want — you should be able to get an excellent world full of creatures freely choosing the way you want, without taking any risk at all.

2. The Second Construal of Middle Knowledge — by Examples — and a Natural Assumption

But if you ask the parties to the middle knowledge dispute what middle knowledge would be, you’ll often get a different answer: You’ll be given examples of conditionals the knowing of which would constitute middle knowledge. The ur-examples of conditionals served up for this purpose are Alvin Plantinga’s

(A) If Curley had been offered $20,000, he would have accepted the bribe(5)

and Robert Adams’s

(B) If President Kennedy had not been shot, he would have bombed North Viet Nam.(6)

Now, these are notably different in form from the conditional I quite naturally reached for in the previous section as the conditional the knowledge of which would be providentially useful,

(C) If Eve is put into situation S1, she will sin.

A couple of differences leap right out. First, my (C) employs some philosopher-speak in its reference to a “situation S1,” while (A) and (B), by avoiding such formalism, have the advantage of being sentences normal speakers of English might actually use. In this case, a little philosopher-speak is justified, for in speaking of the mysterious “situation S1,” we may suppose — as I do suppose — that this is an exactly specified situation. There are presumably many significantly different situations that President Kennedy might have faced had he not been shot, and whether or not he would have bombed North Viet Nam might have crucially depended on the exact nature of the situation he faced as he made his choice. (And similar points would hold for Curley.) Deniers of middle knowledge don’t mean to be merely claiming that neither (B) nor its complement(7)

(Bc) If President Kennedy had not been shot, he would not have bombed North Viet Nam

is true merely because there is no determinate fact of the matter as to exactly which situation Kennedy would have faced had he not been shot, and he would have bombed in some of the situations he might have faced, and not in others. They rather mean to be making the stronger claim that, with respect to the various particular situations which Kennedy would or might have faced had he not been shot, there’s no truth of the matter as to what he would have done in each of those situations, supposing these are situations in which he possesses libertarian freedom with respect to the action in question. Henceforth, then, to make our discussion relevant to the real debate over middle knowledge, we will pretend that the antecedents of the conditionals we’re dealing with exactly specify situations that free agents might face. We will, then, pretend that (A) and (B) do not vaguely refer to innumerably many significantly different situations Kennedy or Curley might have faced, but rather specify precise situations in their antecedents, as (C) was designed to do.

Still, there’s another glaring difference: (C) is future-directed, while (A) and (B) are backward-looking. To get conditionals like my (C), we’d have to change the ur-examples to

(Af) If Curley is offered $20,000, he will accept the bribe
and

(Bf) If President Kennedy is not shot, he will bomb North Viet Nam.

Now, because these sentences are about possible events that are now, as we speak of them, in our past, it’s awkward to change the examples to the future-directed (Af) and (Bf). But, if we’re trying to discuss conditionals the knowledge of which would actually be useful to God in exercising providential control, the change is surely one for the better. For if God were to exercise Molinistic control over the relevant events, wouldn’t He have had to have known what He would have put then — if He spoke English! — in terms of (Af) and (Bf) before deciding whether or not to allow Curley and Kennedy to be put in the relevant situations? Rather than being useful for exercising Molinistic control, (A) and (B) look like they’re in place only when it’s too late to avoid disaster. They appear to be useless examples of “Monday morning quarterbacking,” as E.W. Adams aptly put it (back in the days when quarterbacks called their own plays).(8) To initial appearances, despite their status as the ur-examples of what middle knowledge would be, (A) and (B) appear to be relevant to Molinistic control at all only insofar as they are somehow past-tense versions of (Af) and (Bf) — insofar as their relation to (Af) and (Bf) is that they are true now (when it’s too late), if and only if (Af) and (Bf) were true beforehand. Indeed, the whole middle knowledge debate, on both sides of the issue, seems to be based on some unspoken assumption that some such relation does hold between such conditionals as (A) and (B), on the one hand, and (Af) and (Bf) on the other. That is the fundamental assumption of the current middle knowledge debate.

And there is some intuitive basis for thinking such a relation holds. Suppose we are trying to decide how large a bribe to offer Curley. One topic of discussion is whether (Af) is true: whether he’ll accept if he is offered $20,000. I feel certain he will accept that amount, but you have your doubts, so we instead offer $30,000, and Curley accepts. If we then, in a fit of Monday morning quarterbacking, resume our argument, now in terms of whether (A) is true — whether Curley would have accepted if we had offered him $20,000 — it can certainly seem that we are now taking up in the past tense the same old debate that was phrased in terms of (Af) beforehand. Indeed, how else, other than by updating our phrasing to (A), could we take up that old debate that was put beforehand in terms of (Af)? The assumption that (A) is the past-tense version of (Af) is natural enough that it can certainly be part of the explanation for why the middle knowledge debate focused on such conditionals as (A), yet was thought to be relevant to the possibility of Molinistic control.

3. The Fundamental Assumption and Dualism about Conditionals

But there are plenty of grounds for doubting whether this natural assumption is true, having mainly to do with dualism about conditionals. While there are some border disputes — disagreements about where to place certain controversial examples — and while there are some big differences over how the conditionals in each camp should be treated, there is a surprising degree of agreement among those who work in the area that conditionals fall into at least two camps, which differ quite significantly from one another in their meaning. Indeed, some exceptions aside, most are inclined to give radically different semantic accounts of the two types of conditionals. The ur-examples used to illustrate the difference between the two types are E.W. Adams’s

(D) If Oswald hadn’t shot Kennedy, someone else would have

and

(E) If Oswald didn’t shoot Kennedy, someone else did(9)

Though none of the terminology applied to the camps is entirely happy, conditionals like (E) are typically called “indicative” conditionals, while those like (D) are often called “subjunctive,” as I’ll call them here, or “counterfactual” conditionals.

Subjunctive conditionals like (D) appear to be at least fairly well-behaved. This good behavior is reflected in the fact that the two leading theories of their semantics, those of David Lewis and Robert Stalnaker, are close enough to one another that for many purposes, they are treated as one theory — the “possible worlds” semantics, or, as it’s sometimes even put, the “Stalnaker/Lewis”, or (depending on the preferences of the person you’re listening to) the “Lewis/Stalnaker” approach. Using “closest A-world” to mean the possible world from among those worlds in which A is true that most closely resembles the actual world, and using “[]–>” as the symbol for the subjunctive conditional, “if…, then…”, the basic idea of the theories is this. A subjunctive conditional like (D) is taken to be built up from the same antecedent and consequent as is the corresponding indicative conditional — in this case, (E). Thus, despite the differences in their wording, (D), like (E), is taken to have “Oswald didn’t shoot Kennedy” as its antecedent and “Someone else did [shoot Kennedy]” as its consequent: (D) and (E) are taken to be different ways of combining that same antecedent and that same consequent into conditional statements. For Lewis, “A []–> C” is true iff C is true in all of the closest A-worlds, while for Stalnaker, “A []–> C” is true iff C is true in the closest A-world. (Differences between the theories arise in cases in which there is a tie among various worlds in which A is true for the title, “closest A-world.”)
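
The two truth clauses can be made concrete in a small sketch. Everything in the following Python toy model (the three worlds, the stipulated distance ordering, and the propositions) is invented for illustration; only the Lewis and Stalnaker clauses themselves come from the theories:

```python
# Toy model of the Lewis and Stalnaker truth conditions for "A []-> C".
# The worlds, the distance ordering, and the propositions are all
# stipulated for illustration.

# Each world assigns truth values to the relevant sentences.
worlds = {
    "w_actual": {"oswald_shot": True,  "someone_else_shot": False},
    "w1":       {"oswald_shot": False, "someone_else_shot": False},
    "w2":       {"oswald_shot": False, "someone_else_shot": True},
}

# Stipulated similarity to the actual world (smaller = closer).
distance = {"w_actual": 0, "w1": 1, "w2": 2}

# Propositions are predicates on a world's valuation.
not_oswald   = lambda v: not v["oswald_shot"]
someone_else = lambda v: v["someone_else_shot"]

def closest_a_worlds(a):
    """The A-worlds tied for greatest closeness to the actual world."""
    a_worlds = [w for w in worlds if a(worlds[w])]
    best = min(distance[w] for w in a_worlds)
    return [w for w in a_worlds if distance[w] == best]

def lewis(a, c):
    """Lewis: 'A []-> C' is true iff C holds in ALL closest A-worlds."""
    return all(c(worlds[w]) for w in closest_a_worlds(a))

def stalnaker(a, c):
    """Stalnaker: 'A []-> C' is true iff C holds in THE closest A-world."""
    closest = closest_a_worlds(a)
    assert len(closest) == 1, "Stalnaker posits a unique closest A-world"
    return c(worlds[closest[0]])

# On this model the closest world where Oswald didn't shoot is w1, where
# no one else shot either, so (D) comes out false on both accounts.
print(lewis(not_oswald, someone_else), stalnaker(not_oswald, someone_else))
```

If the distance ordering instead put w1 and w2 in a tie, Lewis would require the consequent to hold in both, while Stalnaker's uniqueness assumption would fail; that is just the point of disagreement noted parenthetically above.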

The usual examples of the objects of middle knowledge — (A) and (B) — are backward-looking subjunctive conditionals like (D), and it is assumed in the middle knowledge debate that middle knowledge would be knowledge of such subjunctives. Many participants on both sides of the middle knowledge debate, including both Plantinga and Adams, often apply the standard account of the semantics of subjunctive conditionals in discussing the issues involved. Thus, the literature on middle knowledge has been littered with much talk of possible worlds. (Adams, though, does register a problem with applying this semantics to God’s use of His supposed middle knowledge in deliberation; see Adams, pp. 118-120.)

By contrast, “indicative” conditionals like (E) appear to be very ill-behaved. We’ll look at one of the ways in which they’re unruly in section 6, below, where we’ll use this poor behavior as a sign that we have a conditional of this type on our hands. This misbehavior has manifested itself in a variety of hugely different semantics for such conditionals, with little by way of consensus emerging about which from among the enormously different options is the right approach. Most of these approaches have nothing to do with possible worlds.(10) In assuming that middle knowledge would be knowledge of subjunctive conditionals, those working on the problem of middle knowledge have been assuming that middle knowledge has nothing to do with these troublesome indicative conditionals.

For better or for worse, however, since there’s nothing very subjunctive about them, our Molinistically useful conditionals, (Af), (Bf), and (C), look like they’re of this unruly, “indicative” type. If these Molinistically useful conditionals and the usual examples of the objects of middle knowledge really are on opposite sides of the great conditional divide, then there is little hope for the correctness of the fundamental assumption on which the middle knowledge debate has rested — that the conditionals like (A), on which the debate has focused, are past-tense versions of conditionals like (Af), which appear to be Molinistically useful. For despite the differences in how to treat indicatives, it’s quite clear that no account on which they could bear the needed relation to subjunctives could be right, and nobody I know of is seriously pursuing such a theory.

However, we should not be too quick to judge this matter. (A) and (B) are indeed close enough in the relevant ways to (D) that they can be safely classified with it in the “subjunctive” camp; they are paradigmatic subjunctives. But (Af), (Bf), and (C) are not paradigmatically anything. While they are very different from (D), they are also not really like (E), either. Our paradigms of the two camps, (D) and (E), are both past-directed, and future-directed conditionals should be classified with care. As I noted at the start of this section, there have been border disputes in the classification of conditionals, and, indeed, the disputes have centered on future-directed conditionals much like (Af), (Bf), and (C).(11) Below we will approach this matter of classification with greater care.

4. Future-Directed “Were”/”Would” Conditionals and Deliberation

But first we need to get some other conditionals out on the table. Perhaps in an effort to make the conditionals being discussed appear to be somehow subjunctive in character while at the same time potentially useful in the exercise of Molinistic control, some participants in the middle knowledge debate have reached for conditionals that are future-directed like (Af), (Bf), and (C), but which have “were”s and “would”s thrown in at appropriate places. In a very interesting portion of “Middle Knowledge and the Problem of Evil,” Robert Adams writes of “a type of subjunctive conditionals that we may call deliberative conditionals.” He explains, “They ought not, in strictness, to be called counterfactual. For in asserting one of them one does not commit oneself to the falsity of its antecedent. That is because a deliberative conditional is asserted (or entertained) in a context of deliberation about whether to (try to) make its antecedent true or false.”(12) So here Adams is being very sensitive to keeping the conditionals he’s dealing with useful in God’s providential deliberations. (Unfortunately, Adams does not adequately address the relation between these “deliberative conditionals” and the examples like our (B), which is taken from Adams’s same paper, and which form the basis of much of Adams’s discussion of middle knowledge.) Adams gives two examples of such “deliberative conditionals.” First, we are given:

(Fw) If God created Adam and Eve, there would be more moral good than moral evil in the history of the world,(13)

which, Adams explains, is the kind of conditional the knowledge of which God might find handy in deciding whether or not to create Adam and Eve. (Fw), I take it, is to be understood as future-directed, and as entertained before God has created Adam and Eve. Adams’s other example of a deliberative conditional is the schematic,

(Gw) If I did x, y would happen.

(Gw) is in the first person, and looks like precisely the kind of thought one is likely to entertain in deciding whether to do x. But, I gather from example (Fw) that deliberative conditionals don’t have to be in the first person. Both (Fw) and (Gw) look like they would be useful in deliberation. And, because they each have a “would” in their consequents, they look more like (D), our paradigmatic subjunctive conditional, than do (Af), (Bf), and (C). And, indeed, in the quotation above, Adams explicitly states that these are to be understood as subjunctive conditionals.

Going beyond Adams, note that we can make these conditionals look even more subjunctive in character, while retaining their appearance of usefulness in deliberation, by placing a “were” into their antecedents. Thus, (Gw) can be turned into

(Gww) If I were to do x, y would happen.

Isn’t (Gww) precisely the conditional you would consider in deliberation over whether to do x? And it looks quite subjunctive in character. Indeed, that such subjunctives or counterfactuals are what’s helpful in deliberation is often assumed in decision theory. To give just one example, here are the opening words of Allan Gibbard and William L. Harper’s “Counterfactuals and Two Kinds of Expected Utility”(14):

We begin with a rough theory of rational decision-making. In the first place, rational decision-making involves conditional propositions: when a person weighs a major decision, it is rational for him to ask, for each act he considers, what would happen if he performed that act. It is rational, then, for him to consider propositions of the form ‘If I were to do a, then c would happen’. Such a proposition we shall call a counterfactual. (p. 153)

But, on the other hand, what of

(G) If I do x, y will happen,

which is bereft of all “were”s and “would”s, and so doesn’t look at all subjunctive? Isn’t (G) the conditional that would be useful in deliberation — for us and for God? But if both (G) and (Gww) are useful, what’s the relation between them?

Going back to your predicament, in the role of God, over whether to put Eve into situation S1, recall that the natural conditional to reach for as the thing you would need to know to exercise Molinistic control is our good ol’

(C) If Eve is put into situation S1, she will sin.

If you’re considering whether or not to put Eve into S1, and if one of (C) or its complement,

(Cc) If Eve is put into situation S1, she will not sin,

is true, and if you know which one is true, you’re all set to exercise Molinistic control. If it’s (Cc) that’s true, you know to put Eve into S1, and that, by doing so, you will certainly achieve the primo world you desire. If it’s rather (C) that you know to be true, you know to keep Eve out of S1 to avoid the disaster. If you don’t know either that (C) is true or that (Cc) is true, you are not in a position to exercise Molinistic control, and must either take a real risk by putting Eve into S1, or else keep her out of S1 and thereby forego your chance at the primo world you want even though you don’t know you couldn’t have gotten that world by taking a risk. So, the issue of whether you possess Molinistically useful knowledge seems to ride entirely on whether you know such conditionals as (C). That such indicative looking conditionals seem to be Molinistically useful is the basis of our challenge to the common assumption that it’s subjunctive conditionals that God would need to know to exercise Molinistic control.
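
The decision structure just described can be summarized in a short sketch. The function name, the knowledge labels, and the outcome descriptions below are invented stand-ins, not anything from the Molinist literature:

```python
# Sketch of the decision problem regarding Eve and S1, as described above.
#   "C"  = God knows (C): Eve will sin if put into S1.
#   "Cc" = God knows (Cc): Eve will not sin if put into S1.
#   None = God knows neither conditional.

def providential_choice(knowledge):
    if knowledge == "Cc":
        # Safe to actualize the top realizable option.
        return "put Eve into S1: the primo world, with no risk"
    if knowledge == "C":
        # The primo world is unrealizable; avoid the disaster.
        return "keep Eve out of S1: settle for the best realizable world"
    # Without knowledge of either conditional, Molinistic control fails.
    return "dilemma: risk S1, or forgo the chance at the primo world"

print(providential_choice(None))
```

The point of the sketch is just that the first two branches are available only to a God with middle knowledge; without it, only the third, dilemmatic branch remains.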

But the gambit of inserting “were”s and “would”s into future-directed conditionals can be brought to the defense of the common assumption. For couldn’t what we just said about (C) be said, with about equal plausibility, of

(Cww) If I were to put Eve into situation S1, she would sin,

which looks very subjunctive in character? If you know either that (Cww) is true or that its complement,

(Cwwc) If I were to put Eve into situation S1, she would not sin,

is true, aren’t you in the same way prepared to exercise Molinistic control? And aren’t you in the same predicament of having to take a risk to get what you want if you don’t know whether Eve would sin or not if you were to put her into S1 — i.e., if you don’t know either (Cww) or (Cwwc) to be true? Whether you have the knowledge needed for Molinistic control appears to ride entirely on whether you know such conditionals as (Cww), which because of its “were” and “would”, at least on the surface, looks like a good candidate for being grouped into (D)’s, rather than (E)’s camp.

As I’ve noted, this, of course, provides some hope for the common assumption that it is subjunctive conditionals that are Molinistically useful. In fact, two distinct possibilities on which the assumption is correct are suggested.

It seems that God either has the knowledge needed to exercise Molinistic control, or He does not. Given that, and assuming that (C) and (Cww) are both Molinistically useful, we are led to the conclusion that (C) and (Cww) are in some sense equivalent. Let’s say that they are “Divinely equivalent” iff God knows one iff God knows the other.(15) If (C) and (Cww) are Divinely equivalent, it seems that they cannot be on opposite sides of the great conditional divide, and at least some surface appearances regarding how to classify conditionals are leading us astray: either (C), despite its appearance, belongs with the subjunctives, or else (Cww), despite its appearance, belongs with the indicatives — or perhaps future-directed conditionals don’t go into either category, but rather form their own. On the first possibility the common assumption underlying the middle knowledge debate proves to be correct. In that case, I am right to think that conditionals like (C) — and, I suppose, (Af), and (Bf), as well — are what God would need to know in order to exercise Molinistic control, but wrong to think that these conditionals should be grouped with (E) in the camp of the “indicatives”. They go rather with (D) in the “subjunctives”, and the application of the semantics of subjunctives to the middle knowledge debate is entirely appropriate.

If, on the other hand, (C) and (Cww) are not Divinely equivalent, then one set of appearances about which conditionals would be Molinistically useful to know are misleading us: despite the fact that both the likes of (C) and the likes of (Cww) appear to be Molinistically useful, one of them is not. The defender of the common assumption can hold that it’s the appearance that (C) would be useful that’s misleading.

Of course, there are mirror possibilities that are hostile to the common assumption. If (C) and (Cww) are Divinely equivalent, they might both belong with the indicatives. In fact, that’s the conclusion I will defend in the next section. Or, as I’ve already noted, they might belong with neither. And if (C) and (Cww) are not Divinely equivalent, it could be (C) that’s Molinistically useful, while it’s (Cww) that only misleadingly appears to be useful.

So some surface appearances, either regarding how conditionals should be classified, or regarding which conditionals would be Molinistically useful, are misleading us. In the next section, we’ll try to discern where appearances go wrong. But before we embark on that task, it’s worth noting that our initial suspicion certainly should be that we are somehow being misled about how to classify conditionals. That both the likes of (C) and the likes of (Cww) would be useful is a strong intuition here that anyone with a grasp of how to competently use conditionals should sense. Of course, sometimes even strong intuitions can run afoul of one another, and some have to be given up. But in this case, what stands in the way of accepting that both (C) and (Cww) would be useful are “intuitions” about how to classify conditionals. That’s no contest, it seems to me. We can all sense that there’s some important difference between (D) and (E), but how exactly to divide conditionals into camps, especially when we expand our coverage to include forward-directed conditionals, is a matter about which we should be very cautious. We shouldn’t be shocked to find that we’ve misclassified (C) or (Cww). In neither case will we be overturning anything close to a strong intuition. Here it’s important to point out that we should not get hung up on the names that have been given to the two types of conditionals. Our question is not over the moods of the verbs in (C) and (Cww), but rather concerns whether their meaning is such as to make it appropriate to group them for semantic treatment with (D), with (E), or with neither.

5. Classifying Our Future-Directed Conditionals: Assertability Conditions

As I’ve intimated, the classification of future-directed conditionals has been the subject of much recent controversy.(16) I will not attempt to present all the considerations that have been advanced — many of which, in my judgment, have been missteps. There are three tests that seem to me to be good tests for classifying conditionals with the indicatives. One of these tests, however, is not easily applicable, without begging important questions, to future-directed conditionals of the type we’re here interested in. I will proceed, then, by presenting the other two tests, noting that they yield the correct result when applied to our paradigms of the types of conditionals — (D) and (E) — and applying the tests to our future-directed conditionals.

One of the most commonly used types of tests for classifying conditionals is that, in various ways, the acceptability or assertability of an indicative conditional goes by the conditional probability of its consequent on its antecedent. There is a variety of closely related tests of this type; we will use a version which depends on the assertability conditions of conditionals. There are many reasons why an assertion might be inappropriate: It may be rude, it may be irrelevant to the conversation in progress, it may be made in a library where one is supposed to be silent, etc. But, in using the conditions under which it is appropriate to assert a sentence as a clue to the sentence’s meaning, it seems best to ignore such matters, and focus on just one aspect of appropriate assertion: whether one is well-enough positioned with respect to the thing asserted to appropriately assert it. When we say that a sentence is “assertable”, then, we will mean only that it is appropriate to assert with respect to that one aspect of assertability, whether or not it might be rude, irrelevant, said in a library, etc. We will use David Lewis’s version of the data about when indicative conditionals are assertable, in this sense, as a test for whether a conditional belongs with the indicatives. According to Lewis, in general, “Assertability goes by subjective probability”; it is “permissible to assert that A only if P(A) is sufficiently close to 1.” But, according to Lewis, things are different in the case of ordinary indicative conditionals; there “assertability goes instead by the conditional subjective probability of the consequent, given the antecedent,” so that it is “permissible to assert the indicative conditional that if A, then C (for short A –> C) only if P(C/A) is sufficiently close to one.”(17) I have some qualms here,(18) but I think this account is close enough to being right for our present purposes.

Note how this account applies to our paradigms. The assertability of (E) does seem to go by the relevant conditional probability: It seems that one is in a position to assert (E) when the probability of someone else’s having shot Kennedy, given that Oswald didn’t shoot him, is very high. For most of us, that conditional probability is very high, and we do find (E) quite assertable. However, despite that same high conditional probability, one can, and most of us do, find (D) quite unassertable. Our test is that if the assertability of a conditional goes by the conditional probability of its consequent on its antecedent in the way Lewis specifies, then it should be grouped with the indicatives. This test gives the right result when applied to our paradigms: (E) passes, while (D) fails, this test for being classified with the indicatives.

Applying our test to the conditionals we’re here interested in, I submit that (Af), (Bf), and (C), which appear to be grammatically indicative, also seem, at least according to this test, to belong with (E) semantically: In each case, it seems assertability goes by the relevant conditional probability. Moreover, in deliberation, (Cww) also appears to be assertable where the conditional probability of Eve’s sinning if you put her into S1 is very close to 1.(19) Likewise, in deliberation, the following instances of (Gww) appear to have the assertion conditions characteristic of indicative conditionals:

(Aww) If I were to offer Curley $20,000, he would accept the bribe

(Bww) If I were to not shoot President Kennedy, he would bomb North Viet Nam

In each case, as I hope you will agree, the assertability of the conditional seems to go by the conditional probability of the consequent on the antecedent. So even our future-directed “were” / “would” conditionals that don’t seem grammatically indicative do seem to belong with the indicatives semantically.

6. Classifying Our Future-Directed Conditionals: The Paradox of Indicative Conditionals

“Indicative” conditionals like (E) display a truly remarkable property: They are subject to what Frank Jackson has dubbed(20) the “Paradox of Indicative Conditionals.” (Being subject to such a paradox is one of the chief ways that indicative conditionals are ill-behaved.) It’s widely recognized that indicatives like (E) have this property. I’m not aware of anyone using the presence of this property as a classifying device, but it seems a good device, and a nice complement to the test we used in the previous section. There we used the conditions under which the sentences in question seem assertable. Another genus of semantic markers concerns which inferences involving a sentence are — or at least seem to be — valid. Our new test is of this second variety.

Before Jackson gave it the above name, the Paradox of Indicative Conditionals was nicely set up by Robert Stalnaker,(21) using as his example the paradigmatically “Indicative” conditional,

(H) If the butler didn’t do it, the gardener did.

The Paradox consists in two apparent facts about (H); it is a remarkable paradox in that these apparent facts are quite simple, and the intuitions — at least one of which must be wrong! — that they are indeed facts are each quite powerful. First, (H) seems to be entailed by the disjunction,

(I) Either the butler did it, or the gardener did it.

If someone were to reason,

(I –> H) Either the butler did it or the gardener did it. Therefore, if the butler didn’t do it, the gardener did,

they would certainly seem to be performing a perfectly valid inference. However, the strong intuition that (I –> H) is valid clashes with a second strong intuition, namely, that (H) is not entailed by (I)’s first disjunct,

(J) The butler did it.

The reasoning,

(J –> H) The butler did it. Therefore, if the butler didn’t do it, the gardener did,

so far from being valid, appears to be just crazy. (Only a philosopher, dazed by over-exposure to ⊃’s, would actually reason in that way.) But at least one of these strong intuitions — that (I –> H) is valid or that (J –> H) is invalid — must be wrong. Given that (J) entails (I), and given the transitivity of entailment, it just can’t be that (H) is entailed by the “weaker” (I) but fails to be entailed by the “stronger” (J).
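The clash just described can be laid out as a two-step derivation, writing ⊨ for entailment:

```latex
% If the first intuition is correct, (I) entails (H):
\[ I \vDash H \]
% But (J) entails (I), by disjunction introduction:
\[ J \vDash I \]
% So, by the transitivity of entailment,
\[ J \vDash H, \]
% which is exactly what the second intuition denies.
```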

This suggests a test: If a conditional, A –> C, has the remarkable property of being subject to the “Paradox of Indicative Conditionals” — that is, if it gives the strong appearance of being entailed by ~A or C but also seems not to be entailed by ~A — then it should be classified with the Indicatives. Note that we are using highly suspect intuitions in applying this test, but also that we are not in any way relying on our intuitions being correct. Indeed, whenever a conditional does elicit the two intuitions that indicate it should be classified with the “Indicatives,” we know that at least one of those intuitions must be wrong.(22) We are using how inferences involving conditionals strike us as a classifying device, even where we know that at least some of the intuitions are misleading.

Applying this test to the ur-examples of the types of conditionals, we find that the test works here. For the “indicative” (E) is subject to the Paradox, while the “subjunctive” (D) is not. (E) does indeed seem to be entailed by

(K) Either Oswald shot Kennedy, or someone else did.

Again, the reasoning,

(K –> E) Either Oswald shot Kennedy, or someone else did. Therefore, if Oswald didn’t shoot Kennedy, someone else did,

while not exciting, certainly gives a very strong appearance of being valid. But (E) seems not to be entailed by (K)’s first disjunct,

(L) Oswald shot Kennedy.

For

(L –> E) Oswald shot Kennedy. Therefore, if Oswald didn’t shoot Kennedy, someone else did

seems about as crazy as (J –> H). On the other hand, as we would expect, the “subjunctive” conditional (D) is not subject to the Paradox, for (D) does not seem to be entailed by (K); the inference,

(K –> D) Either Oswald shot Kennedy, or someone else did. Therefore, if Oswald hadn’t shot Kennedy, someone else would have,

in contrast to (K –> E), does not seem valid.

When we apply this test to our future-directed conditionals, we find that our natural examples of useful conditionals do display the same remarkable property of being subject to the Paradox. (Af) does indeed seem to be entailed by

(M) Either Curley will not be offered $20,000, or he will accept the bribe,

but (Af) seems not to be entailed by

(N) Curley will not be offered $20,000.

That is,

(M –> Af) Either Curley will not be offered $20,000, or he will accept the bribe. Therefore, if Curley is offered $20,000, he will accept the bribe

seems valid, while

(N –> Af) Curley will not be offered $20,000. Therefore, if Curley is offered $20,000, he will accept the bribe

seems invalid. So (Af) displays the remarkable property of being subject to the Paradox, as do (Bf) and (C), as the reader can verify.

What of our future-directed “were” / “would” conditionals? That’s not quite as clear, but when they are considered in the context of deliberation, they also seem to be subject to the Paradox. You may not be in a position to know the premises — (M’) and (N’) (they differ from (M) and (N) in that they are in the first person) — of the following inferences while deliberating about whether to offer Curley $20,000. Still, we can intuitively evaluate these inferences for validity. (Parallel point: A bachelor may not be in a position to know whether he will marry while he is deliberating about whether to marry. Still, he can recognize as valid the inference: I will marry; therefore, I will no longer be a bachelor.) When we do, the Paradox seems to emerge. Half of the test is clearly met, for

(N’ –> Aww) I will not offer Curley $20,000. Therefore, if I were to offer Curley $20,000, he would accept the bribe

intuitively seems invalid. That’s an easy call. That the above inference actually is invalid is — along with just about everything else in a context in which a paradox is looming — quite questionable. But that the inference intuitively appears to be invalid is quite clear, and, again, that’s all we’re relying on in applying our test. It’s the other half of our test that is here problematic. But, at least arguably, in the context of deliberation about whether to offer Curley $20,000,

(M’ –> Aww) Either I will not offer Curley $20,000, or he will accept the bribe. Therefore, if I were to offer Curley $20,000, he would accept the bribe

seems valid; you certainly seem to be in a position to deduce the conclusion, (Aww), if you somehow came to know the premise, (M’). It’s here, though, that intuitions are not as strong as we might like. This, I think, is due to there being contexts other than that of deliberation where inferences like (M’ –> Aww) seem invalid. We will have to discuss this in section xxx, along with some other matters. But in the context of deliberation, the inference does seem valid.

7. Tentative Conclusions and a Conjecture

Our tests suggest that both (C) and (Cww) function as indicative conditionals. This, in turn, suggests the further conclusions that both actually are — as they seem to be — Molinistically useful, and that they are Divinely equivalent: God knows (C) if and only if He knows (Cww).

What then is the difference between them? And why would one use one rather than the other?

My challenge to the orthodox view that it is subjunctives that would be Molinistically useful has focused on future-directed and indicative-looking conditionals like

(C) If Eve is put into situation S1, she will sin,

which give the strong appearance of being Molinistically useful, and yet don’t look in any way subjunctive. It has been suggested to me in defense of the orthodox view(23) that the likes of (C) are really just used “for short” to indicate more subjunctive-looking conditionals, like

(Cww3) If Eve were to be put into situation S1, she would sin,

which are the real objects of middle knowledge — and which semantically belong with the subjunctive conditionals, where possible worlds semantics applies.

It now seems more likely that, instead of being just used “for short” to indicate the “were”/”would” conditionals, our indicative-looking examples like (Af), (Bf), and (C) should be viewed both as real objects of middle knowledge and as belonging in the indicative camp with (E), and that subjunctive-looking conditionals like (Cww3) also belong in the indicative camp, as souped-up versions of the straightforwardly indicative conditionals.

What’s the function of the souping up? One suggestion that comes quickly to mind is that one would use (Cww) when one wishes to indicate that it is quite doubtful that Eve will be put into situation S1, or that one is leaning toward not putting her into S1. On this suggestion, the function of using a “were”/”would” future-directed conditional like (Cww), rather than a simpler conditional like (C), is to indicate that it’s quite doubtful that the antecedent will be realized, or that one judges this to be doubtful. This suggestion leaps to mind because most of the situations in which one would most naturally use (Cww) rather than (C) are, where one is the agent involved, cases in which one is leaning toward not putting Eve into S1 (or has even already decided not to put her into the situation and is explaining one’s reason for that decision), and, where one is not the agent involved, cases in which one judges it doubtful that Eve will be put into S1.

However, I think a more general explanation is needed, since I think there are situations in which (Cww) is appropriately used even though one is leaning, even quite strongly, toward putting Eve into S1, or when one thinks it likely, even highly likely, that Eve will be put into S1. I believe the function of going the “were”/”would” route is to call attention to the possibility that the antecedent of the conditional will not be realized — in this case, to the possibility that Eve will not be put into situation S1. There are many reasons one might have for calling attention to this possibility. One very common reason is that one judges it — the possibility that the antecedent will not be realized — to be quite likely; i.e., one judges the antecedent of the conditional to be quite unlikely. However, there could be other reasons for calling attention to this possibility. For instance, if you are the agent who is deciding whether or not to realize the antecedent of a conditional, and you are leaning toward realizing that antecedent but haven’t definitely decided, I think you can appropriately use a “were”/”would” construction to indicate that you still haven’t completely ruled out the possibility of not realizing the antecedent. Here, you would be calling attention to the possibility that the antecedent won’t be realized, even though you don’t think it very likely that it won’t be realized.

8. Some Lessons from Sly Pete: Right but Deliberatively Useless Conditionals

We will now consider two reasons for caution regarding the conclusions we have just tentatively drawn, one reason in this section, and one in the next. Both are nicely illustrated by Allan Gibbard’s tale of the riverboat gambler, Sly Pete, which we will slightly modify for our current purposes.(24)

Our first cause for caution is that indicative conditionals seem to violate the Law of Conditional Non-Contradiction in ways that can seem to jeopardize their usefulness in deliberation.

Sly Pete is playing a new card game called Risk It! against Gullible Gus.(25) Largely because your henchmen have been hovering about the game and helping him to cheat, the unscrupulous Pete is winning as they go into the final round of the game; he has already won $1,000 from Gus. The final round of this game is quite simple. A deck of 100 cards, numbered 1-100, is brought out, shuffled, and one card is dealt to each of the two players. After each player gets a chance to view his own card, but not his opponent’s, the player who is leading going into the final round — in this case, Pete — gets to decide whether he wants to “take the risk” — to let his winnings be doubled or cut to nothing, depending upon whether he has a higher numbered card than his opponent.

In our first version of the story, though, Pete doesn’t have to take a real risk at all, because your henchman Sigmund (the signaler) has seen what card Gus is holding, has signaled to Pete that it is 83, and has received Pete’s return sign confirming that he got the message; so Pete knows that Gus is holding 83. Sigmund doesn’t know what card Pete is holding, and so doesn’t know which player holds the higher card, but because he knows that Pete knows what both cards are, and because he knows that Pete is not stupid enough to “take the risk” if his own card is the lower one, he knows that, and is able to report to you that, “If Pete takes the risk, he will win.” Such information is helpful to you because, we may suppose, you are making derivative bets on the results of Pete’s game. Based on your knowledge that Sigmund is reliable and knows what he’s doing, you too know that if Pete takes the risk, he will win.

In our second version of the story, it’s your henchman Snoopy, rather than Sigmund, who reports back to you. Snoopy doesn’t know the signals, so, though he was able to see Gus’s card — which again is 83 — he was not able to help Pete. But Snoopy is able to help you, for he has seen Pete’s card as well as Gus’s. Because Snoopy knows that Pete is holding the lower card — 55, let’s say — he knows that, and is able to report to you that, “If Pete takes the risk, he will not win.” Here, based on Snoopy’s reliable testimony, you too know that if Pete takes the risk, he will not win.

But wait! What if we combined our two versions of the story into a third version? Pete is indeed holding the lower card, as was specified in version 2, and as was left open in version 1. Sigmund has done his signaling, as in version 1, and Snoopy has done his snooping, as in 2, but each is unaware of what the other has done. As in version 1, Sigmund does know that Pete knows what Gus’s card is, and so has reported to you — quite appropriately, knowingly, and truthfully, it seems — that “If Pete takes the risk, he will win.” As in version 2, Snoopy knows that Pete holds the lower card, and so has reported to you — again, quite appropriately, knowingly, and truthfully, it seems — that “If Pete takes the risk, he will not win.” Are we to suppose that both of these reports are true, and that you know both that Pete will win if he takes the risk and that Pete will not win if he takes the risk? This would appear to be a violation of the Law of Conditional Non-Contradiction — the Law that A –> C and A –> ~C can’t both be true.(26)

There are excellent reasons, roughly of the type that Gibbard gives,(27) for thinking that both reports are true — or at least that neither is false. Because they are competent speakers making the relevant assertions in an appropriate manner, we shouldn’t charge either Sigmund’s or Snoopy’s claim with falsehood unless there’s some relevant fact which they are getting wrong, and their mistake about this relevant fact explains why they are making a false assertion. But neither henchman is making any mistake about any underlying matter. To be sure, each is ignorant of an important fact — Snoopy doesn’t realize that Pete knows what Gus’s card is, and Sigmund doesn’t know that Pete is holding the lower card. But in neither case does this ignorance on the speaker’s part make it plausible to suppose he is making a false claim.

Since for most who hear the story, it’s Sigmund’s report that seems the more likely candidate for being false, let’s work this out in his case. Pete holds the lower card, and Sigmund is unaware of that fact. And it seems a very relevant fact: Anyone (including Sigmund) who comes to know this fact will thereby become very reluctant to say what Sigmund says — that Pete will win if he takes the risk. However, while Sigmund doesn’t know that Pete holds the lower card, he does recognize the substantial possibility that that’s the case. In fact, from Sigmund’s point of view, the probability that Pete’s card is lower than Gus’s is almost .83. (Recall that Sigmund knows that Gus holds card 83, but doesn’t know which of the remaining 99 cards Pete holds.) So, if this fact — that Pete holds the lower card — were enough to make Sigmund’s claim false, then from Sigmund’s own point of view, his claim would have a very high probability of being false. But a speaker cannot appropriately make a claim that from his own point of view is probably false. And Sigmund does appropriately assert (O). So the fact that Pete holds the lower card must not render Sigmund’s claim false. But then, what does? Nothing — there are no good candidates. Likewise for Snoopy and his ignorance of the fact that Pete knows what Gus’s card is. It’s controversial whether indicative conditionals are truth-evaluable. But if your henchmen’s conditional reports to you are the sort of things that can be true or false, we must conclude that they are both true. (Note that those who hold that indicative conditionals are equivalent to material conditionals will be quite happy with this story, as they reject the Law of Conditional Non-Contradiction anyway. In fact, the reasoning you will perform, if you’re clever enough, upon receiving both henchmen’s reports, is precisely what a material conditional reading of indicative conditionals would indicate: A –> C; A –> ~C; therefore, ~A — Pete will not take the risk!)
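Two pieces of reasoning in this paragraph can be checked mechanically: Sigmund’s probability estimate, and the material-conditional inference (A –> C; A –> ~C; therefore, ~A) just rehearsed. The following is only an illustrative sketch of those checks, not part of the argument:

```python
from fractions import Fraction

# Sigmund's estimate: Gus holds card 83, and Pete holds one of the
# other 99 cards, 82 of which (those numbered 1-82) are lower than 83.
p_pete_lower = Fraction(82, 99)
print(float(p_pete_lower))  # about 0.828 -- "almost .83"

# Material-conditional check: on the material reading, A -> C together
# with A -> ~C entails ~A (the clever bettor's conclusion that Pete
# will not take the risk).
def mat(a, c):
    # material conditional A -> C: false only when A is true and C false
    return (not a) or c

entails = all(
    not a                                # the conclusion, ~A, holds ...
    for a in (True, False)
    for c in (True, False)
    if mat(a, c) and mat(a, not c)       # ... in every row where both premises hold
)
print(entails)  # True
```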

And if indicative conditionals are not the sort of things that can be true or false, then we must conclude that both of your henchmen’s reports have whatever good property can be assigned to them in lieu of truth — assertable, as opposed to unassertable; assertable and not based on error, as opposed to unassertable or based on error; probable, as opposed to improbable; acceptable, as opposed to unacceptable; or whatnot.

But then, how can these conditionals be what’s useful in deliberation? Pete wants to take the risk if, but only if, he will win if he takes the risk. So it looks like his deliberation turns precisely on the issue of which, if either, of

(O) If Pete takes the risk, he will win

and

(Oc) If Pete takes the risk, he will not win

is, let’s say, right — true, or, if not that, then acceptable, or whatnot. However, if it turns out that (O) and (Oc) are both right, then they really are not so useful in guiding Pete’s deliberation over whether he should make their shared antecedent true, since they give conflicting advice. (O)’s rightness tells Pete he should take the risk; (Oc)’s rightness tells him not to. Which should he listen to?

Not a hard question, actually: Of course, Pete should listen to Snoopy’s (Oc) and not take the risk. (Oc), not (O), is, we will say, deliberatively useful — it is the one the agent involved should make use of in deliberating over whether to (try to) make the antecedent true. What’s more, as normal, competent speakers, we demonstrate an awareness of the fact that some conditionals, while perhaps useful for other purposes, are not deliberatively useful, for we won’t inform a deliberating agent of such conditionals, even though we will so inform others. Note this crucial difference between Sigmund and Snoopy. Based on his knowledge of what both players’ cards are, Snoopy is in a position to knowingly inform you of (Oc), and, what’s more, if he had a chance to quickly whisper a conditional to Pete as Pete deliberated over whether to take the risk, he could also inform Pete of the truth or rightness of the deliberatively useful (Oc), whispering, “If you take the risk, you won’t win.” Sigmund, on the other hand, while he knows that (O), and is in a position to inform you of (O), cannot inform the deliberating Pete of (O). If Sigmund is a competent speaker, he knows not to tell the deliberating Pete that (O), for he knows that (O) is not deliberatively useful.

Let me emphasize that in saying that (O) is not deliberatively useful, I don’t mean to be saying that it is useless for all deliberations. In our story, (O) is very useful to you as you decide — deliberate about — which derivative bets to place on Pete’s game, and, in keeping with that, Sigmund feels very well-positioned to inform you of (O). In saying that (O) is not deliberatively useful, I mean only that it is not useful for the agent involved in deciding whether to make the antecedent true in order to make the consequent true. Because he can tell that (O) is not in that way deliberatively useful, Sigmund won’t inform the deliberating Pete of (O).

This may tempt us to say that, while we may get a violation of the Law of Conditional Non-Contradiction when it comes to the likes of the third-person (O) and (Oc), the same cannot happen with first- and second-person versions of them; when talking to Pete, the agent involved, only

(O2c) If you take the risk, you will not win,

and not

(O2) If you take the risk, you will win,

can be right, where Pete holds the lower card. And, for Pete himself, only

(O1c) If I take the risk, I will not win,

and not

(O1) If I take the risk, I will win

can be right. But this temptation must be resisted. (O1) can be right for Pete, even where he holds the lower card — as follows, for instance. Pete doesn’t yet know Gus’s card, but, as has been planned, Sigmund has just handed Pete a slip of paper with Gus’s number written on it. (He’s told the unsuspecting Gus that it’s just a phone message from Pete’s wife.) After receiving the slip, but before looking at it, (O1) is right for Pete, for the same reason that (O) is right for Sigmund: Pete knows of himself, just as Sigmund knows of him, that he will not take the risk when he knows that he holds the lower card, and he also knows that he will find out whether he holds the lower card before having to decide whether to take the risk. This is enough to make (O1) right for him — and also to make (O1) useful in some deliberations Pete might make: Like you, Pete could find his knowledge of (O1) quite useful in deciding whether to make derivative bets on his own game, if he had to make such derivative bets before he got a chance to read the slip of paper. But (O1), Pete knows, is not deliberatively useful, as we’re here using that phrase: Just as Sigmund knows not to inform the deliberating Pete of (O2), so Pete knows not to use (O1) in the deliberating way — in deciding whether or not to (try to) make its antecedent true in order to make its consequent true.

Many intriguing and baffling questions about indicative conditionals arise from this story, and I will resist addressing them here. Our current concern is whether indicative conditionals can function as the guides to deliberation. The challenge posed by our story is that, because they seem subject to failures of Conditional Non-Contradiction, indicative conditionals might not be fit to serve as such guides. If you want C to be true, but want to avoid ~C’s truth, and are wondering whether to (try to) make A true in order to get your way with C, then failures of Conditional Non-Contradiction pose a problem of conflicting advice: the truth of A –> C advises that you make A true, while the truth of A –> ~C advises that you not make A true. However, the very story which raises the problem also provides an answer to it: One of those two true — or at least “right” — conditionals will be deliberatively useless, and competent speakers, at least when they know the basis of such a deliberatively useless conditional’s rightness, will know that it’s deliberatively useless.(28) So, while indicative conditionals, like Snoopy’s (Oc), often are the proper guides to deliberation (as Snoopy realizes, which is why he’ll inform Pete of (O2c)), not all of these conditionals can be so used. Some of them, like Sigmund’s (O), are deliberatively useless, as Sigmund, being a competent speaker, realizes — which is why he won’t inform the deliberating Pete of (O2).

[Though we seem to have a facility to recognize deliberatively useless indicative conditionals, it would be nice to have a general account of when conditionals have this property. Why is Sigmund’s (O) deliberatively useless, while Snoopy’s (Oc) is useful? It seems to be because the basis of Sigmund’s (O) involves assumptions about what Pete’s reasons would be for taking the action — “If Pete does this, it will be for this reason…” — or, perhaps more generally, because it has a “back-tracking” basis — “If the antecedent is true, that will be because…”. I haven’t been able to find examples of deliberatively useless conditionals that don’t have that general feature.]

9. Section to be written.

Problem: For Sigmund (having signaled Pete), the assertability of “If Pete were to take the risk, he would win” does not go by the conditional probability of the consequent on the antecedent. That conditional probability is high for Sigmund — which is why he will state “If Pete takes the risk, he will win” — but the “were” / “would” conditional is unassertable for Sigmund. (Likewise, “If I were to take the risk, I would win” is unassertable for Pete where he’s received the slip of paper from Sigmund, even though, having received the paper, the conditional probability for Pete of his winning on the assumption that he takes the risk is quite close to 1.)

Diagnosis: This occurs because the conditional probability is getting near 1 because of back-tracking reasons. Set aside such cases.

10. Section to be written, reviewing the case that there are two separate problems here — whether God knows the past-directed subjunctives, and whether He knows the Molinistically useful, future-directed conditionals.

-stress the positive grounds, but also make the negative case, pointing out what a groundless item of blind faith the fundamental assumption now appears to be. What’s to be said in its favor?

X-X. Does God Know Past-Directed Subjunctives?

-Be brief — could write a whole book on whether these are true — TRY NOT TO

-Take the semantics seriously. Such a conditional is true if its consequent is true in the closest A-worlds (in the contextually relevant way of measuring closeness or similarity). Often, it seems, C or not-C will be true in the closest A-worlds, and, it seems, God can know that. Why not? Address 3 main arg’s

1. If we’re free in the relevant cf situation, then we might have, and might not have. But, by duality, then, it’s neither true that we would have nor that we would not have. Depends on duality thesis, which is refuted in my “Can It Be…?” paper in Phil. Perspectives.

2. No ground or basis, nothing to make it true. Use material in the middle knowledge fn in “Can It Be…?”

3. Bob’s argument that it won’t be true soon enough to be useful. But we’re no longer supposing this knowledge has to be useful. We’re here considering whether God knows these useless examples of Monday-morning quarterbacking.

X-X. Does God Possess Molinistically Useful Middle Knowledge?

Thesis: Freddoso, Flint are right that middle knowledge — at least on this construal — stands or falls with simple foreknowledge.

-Intuitive case: Suppose God doesn’t know whether Eve will sin if she’s put into S1. He decides to take the risk. Nothing else changes — Eve isn’t created yet, nothing is created, nothing is done. All that’s new is that God has decided to try Eve in S1. Is it plausible to suppose that He now, just having made that decision, knows that she will sin? No way! Is it incoherent to suppose that, now that He knows that she will sin, He cannot change His mind about that S1 idea? If He knows, simply knows, that she will sin, then there is no turning back. But what’s happened to block turning back?

-Case based on my own preferred — and the correct! — semantics for indicative conditionals. (Would have to give brief summary of reasons for thinking the semantics is independently plausible?) Delivers the intuitive result that middle knowledge stands or falls with simple foreknowledge.

-Bob holds out hope for the possibility of the foreknowledge-yes, middle knowledge-no position. But this is based on there (at least arguably) being a ground for the foreknowledge in the future occurrence of the relevant event, while there is no ground for the conditionals that would be the objects of middle knowledge. (See p. 113*: “Most philosophers …. have supposed that categorical predictions, even about contingent events, can be true by corresponding to the actual occurrence of the event they predict. But propositions (1) and (2) [past-directed subjunctives] are not true in this way. For there never was nor will be an actual besieging of Keilah by Saul.”) But this is based on these conditionals being counter-factuals. Here we are dealing only with Molinistically useful conditionals. At the time (or at the point in the order of explanation) that Molinistic deliberation is taking place, there is no division between the counter-factual and factual (true-antecedent) conditionals. In fact, God is deciding which of them to make which! Molinistically useful conditionals shouldn’t even be called “counterfactuals of freedom”; they’re just conditionals of freedom — conditionals like (C), (Af), and (Bf).

-So the opponents of middle knowledge are right that middle knowledge — at least on our second construal — stands or falls with foreknowledge. But they’ve got no decent reason to think they both stand. [Import e-mail to Fischer about this here.] I, for one, say they both fall. But the lesson here is that, to deny middle knowledge, you really have to go all the way: deny foreknowledge, and then, to preserve omniscience, deny fore-truth. Be an Aristotelian about future contingents!


1. Note on libertarian freedom

2. Expl. of libertarian freedom? — Value, compatibilism, etc.

3. Physics stuff, too

4. Even on Molinism, God cannot necessarily get everything He wants, even if His wants are limited to the domain of the metaphysically possible. For on Molinism, there will be facts, known to God, of the form, If creature C1 is put into situation S1, she will (freely) perform action A1. These, however, are contingent facts. (What makes these contingent facts true? That’s a tough question for Molinists.) There will be possible worlds in which C1 is put into S1, but fails to perform A1. But these worlds, though metaphysically possible, will be unrealizable possible worlds for God. Molinistic control, then, involves God’s certain ability to get exactly the world that He most wants, from among the realizable worlds, while taking no risks.
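The structure of this point can be sketched in a toy model (my own illustration; the creature, situations, values, and the lookup-table treatment of the facts of freedom are all hypothetical): a world counts as realizable only if it agrees with the contingent facts of freedom, and God selects the best world from among the realizable ones, which may exclude the world He would most want.

```python
# Toy model (hypothetical, for illustration only) of Molinist "realizable worlds".
# The facts of freedom are a fixed, contingent lookup table that God does not
# choose; a world is realizable only if it agrees with them.

# Contingent facts of freedom: in situation S1, creature C1 would freely do A1, etc.
facts_of_freedom = {("C1", "S1"): "A1", ("C1", "S2"): "A2"}

# Metaphysically possible worlds: which situation C1 is placed in, what she does,
# and how much God values the resulting world (values are made up).
possible_worlds = [
    {"situation": "S1", "action": "A1", "value": 10},
    {"situation": "S1", "action": "A2", "value": 3},   # possible but unrealizable
    {"situation": "S2", "action": "A2", "value": 7},
    {"situation": "S2", "action": "A1", "value": 12},  # possible but unrealizable
]

def realizable(world):
    """A world is realizable iff C1's action in it matches the facts of freedom."""
    return facts_of_freedom[("C1", world["situation"])] == world["action"]

realizable_worlds = [w for w in possible_worlds if realizable(w)]

# Molinistic control: God gets, with no risk, the best of the realizable worlds,
# even though a more valuable (but unrealizable) world exists.
best = max(realizable_worlds, key=lambda w: w["value"])
```

Note that the value-12 world is metaphysically possible yet unrealizable here, which is just the point of the footnote: even on Molinism, God cannot necessarily get everything He wants.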

5. Ref.

6. See Plantinga (1974), p. 174 and Adams (1977), p. 109.

7. I will call (B) and (Bc), and, in general, pairs of conditionals of the forms A –> C and A –> ~C, “complements” of one another. In so doing, I am not assuming that they are contradictories of one another — that exactly one of them must be true. Nor am I even assuming that they are inconsistent — that at most one of them can be true. (Arguably, in some cases of indicative conditionals, both members of a pair of “complements” can be true.)

8. ?? E.W. Adams (1975), p. 133.

9. See E.W. Adams (1970): “Subjunctive and Indicative Conditionals,” Foundations of Language 5 (1970): 89-94.

10. The exception here is Stalnaker’s account, on which an indicative conditional is true, roughly, where its consequent is true in the closest A-world that is consistent with conversational presuppositions.

11. For the flavor of the history of this dispute, see the opening two pages of Bennett (1995).

12. Adams ref.?

13. This is Adams’s indented proposition (8), “Middle Knowledge and the Problem of Evil,” p. 118*. ??

14. In W.L. Harper, R. Stalnaker, and G. Pearce, ed., Ifs (D. Reidel Publishing Company, 1978): pp. 153-190.

15. Why not just go for ordinary equivalence? Because on some theories, including the one I’m inclined to accept, indicative conditionals are not the kind of thing that can be true or false, and so the notion of equivalence may not apply to them. Nevertheless, I think that one can believe that if A, then C, and can know that if A, then C. The notion of “Divine equivalence” may apply, then, even if ordinary equivalence does not.

16. Refs to Dudman & post-Dudman papers, especially Bennett (1995): “Classifying Conditionals: The Traditional Way is Right,” Mind 104: 331-354.

17. Lewis, “Probabilities of Conditionals and Conditional Probabilities,” p. 76 in Jackson, ed.

Frank Jackson also believes — in Lewis’s words — that in the case of indicative conditionals “assertability goes by the conditional subjective probability of the consequent, given the antecedent,” and proposes a closely related, but different, account of their assertability conditions. According to Jackson, the degree of assertability of A –> C is equal to the conditional probability of C on A: As(A –> C) = P(C/A). Here is one piece of evidence Jackson gives for this thesis: “Or take a conditional with 0.5 assertibility, say, ‘If I toss this fair coin, it will land heads’; the probability of the coin landing heads given it is tossed is 0.5 also.” Both this thesis and the datum cited here to support it seem to me quite wrong; I would be inclined to use the assertability of Jackson’s coin example as a counter-example to his thesis, rather than as a support for it. It is now 3:45 P.M., Tuesday. If I was at the bank just 10 minutes ago, found it open, conducted business there, and found out that they were scheduled to be open until 4:30 today and on Tuesdays in general, “The bank is now open” seems quite assertable for me: If a friend was wondering whether the bank is open, to help her decide whether to walk over there, I wouldn’t hesitate at all to make the assertion. If I was at the bank just one week ago on Tuesday afternoon, and found that the bank’s hours included their being open on Tuesdays until 4:30, then I also seem to be well enough positioned to assert that the bank is now open. If we have to assign numerical values to the degree of assertability of “The bank is now open,” I suppose that this value would be 1, or at least close to 1, in each of these cases. But if I was last at the bank 15 years ago, but do remember that it was then open on Tuesdays until 4:30 in the afternoon, then I seem not to be in a position to assert that it is open now; the assertability here seems to be 0, or quite close to 0.
The bank’s hours 15 years ago just don’t give me enough of a basis to assert that it’s open now. I suppose there should be some time in a continuum of cases such that, if I was last at the bank that long ago, and discovered that it was open on Tuesdays until 4:30, the assertability of “The bank is now open” would be about .5. This would be some kind of borderline case, where it’s in the grey zone between my being positioned to make the assertion or not — perhaps if I was at the bank about 4-1/2 months ago? I think, though, that the probability of the bank’s being open, at least in most contexts, would have to be much higher than .5 before the assertability of the sentence would be such a borderline case. If someone just tossed a fair coin, and I haven’t found out how the coin landed, so that the probability of its having landed heads is .5 for me, I would judge the assertability of “The coin landed heads” to be very low — approaching, if not equal to, 0, if I had to assign a numerical value. What business have I to assert that, when it’s no more likely than not? I don’t see how anybody would judge that the assertability here was .5 unless they were already assuming that assertability is equal to probability. Likewise for the conditional. If a coin is fair, so that the conditional probability of its landing heads if tossed is .5, I would judge the assertability of Jackson’s ‘If I toss this fair coin, it will land heads’ to be very low — approaching, if not equal to, 0. And again, I don’t see how anybody could think that its assertability is .5 unless they were already assuming that assertability here goes by conditional probability. Lewis’s accounts, both of the general case and the conditional case — that assertability kicks in when the probability (whether simple or conditional) gets sufficiently close to 1 — seem much closer to the facts.
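The disagreement can be made concrete with a small simulation (my own sketch; the 0.9 toss probability and the 0.95 threshold are hypothetical choices): Jackson’s rule assigns the coin conditional an assertability of about .5, while a Lewis-style threshold rule counts it as unassertable.

```python
import random

random.seed(0)

# Estimate P(heads | tossed) for a fair coin by simulation, then compare
# Jackson's rule As(A -> C) = P(C/A) with a Lewis-style threshold rule.
trials = 100_000
tossed = heads_and_tossed = 0
for _ in range(trials):
    if random.random() < 0.9:       # hypothetical: the coin is usually tossed
        tossed += 1
        if random.random() < 0.5:   # fair coin
            heads_and_tossed += 1

p_heads_given_tossed = heads_and_tossed / tossed    # approximately 0.5

# Jackson: assertability of "If I toss this fair coin, it will land heads"
# simply equals the conditional probability, so it comes out about .5.
jackson_assertability = p_heads_given_tossed

# Lewis-style rule: assertable only when the conditional probability is
# sufficiently close to 1 (0.95 is a hypothetical stand-in for "sufficiently").
THRESHOLD = 0.95
lewis_assertable = p_heads_given_tossed >= THRESHOLD  # False for a fair coin
```

On the footnote’s view, the second verdict matches the intuitive unassertability of the coin conditional, while Jackson’s .5 does not.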

18. Lottery, conditional lottery.

19. Of course, where you are God, and therefore infallible, you might not want to assert anything unless its probability is 1. We can treat that as a limiting case of what it would be for a probability to be “sufficiently close to 1.” Then, in the case of indicative conditionals, you’ll only assert them if the relevant conditional probability is 1.

20. Ref. I’m at least unaware of anyone using this terminology before Jackson.

21. Ref. to “Indicative Conditionals”.

22. Well, at least one must be wrong if validity is understood in the usual way — as the impossibility of the premise being true while the conclusion is false. Those who don’t think indicative conditionals have truth conditions will often propose other relations between premises and conclusions to stand in for validity, as understood above, and some such relations will be such that they really do hold for “~A or C ∴ A –> C”, but not for “~A ∴ A –> C”. With some plausibility, such relations can be put forward as what validity really is — as opposed to the usual understanding — or, at least, it can be claimed that validity can be understood in some such way, that our intuitions are responding to some such understanding, and that, therefore, our intuitions really are both correct after all.
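A quick mechanical check (my own illustration, reading A –> C as the material conditional, one standard truth-conditional account) confirms that both inference patterns come out valid on that reading, so the intuitions about them cannot both be respected there:

```python
from itertools import product

# Read "A -> C" as the material conditional: true unless A is true and C false.
def material(a, c):
    return (not a) or c

def valid(premise, conclusion):
    """Classical validity: no assignment makes the premise true and the conclusion false."""
    return all(not (premise(a, c) and not conclusion(a, c))
               for a, c in product([True, False], repeat=2))

# "Or-to-if": from ~A or C, infer A -> C.  Valid on the material reading.
or_to_if = valid(lambda a, c: (not a) or c, material)

# From ~A alone, infer A -> C.  Also valid on the material reading.
neg_to_if = valid(lambda a, c: not a, material)
```

Since the second pattern strikes most as invalid while the first strikes most as valid, at least one intuition must be given up so long as the material reading and the usual notion of validity are both kept.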

23. By Alvin Plantinga, in correspondence.

24. Gibbard (1980), pp. 226-229, 231-234.

25. In Gibbard’s story, Pete is playing Poker. Some readers, however, don’t know much about Poker, and rather than explaining that game, I am using a simpler, made-up game, where the relevant rules are easier to explain. Also, in Gibbard’s story, your two henchmen each hand you a note, and you are unable to tell which note came from which henchman. I’ve changed that to accommodate the different philosophical lessons I’m looking to draw from the story.

26. Why not just say that this would be a violation of the Law? Some would try to preserve the Law, while retaining the truth of both reports, by appealing to extreme context-sensitivity: only “If Pete risks he will win” is Sigmund-true; only “If Pete risks he will not win” is Snoopy-true.

27. See Gibbard (1980), the bottom paragraph on p. 231. Gibbard is arguing for the non-falsehood of slightly different, past-directed conditionals. He relies on the point that neither henchman — in Gibbard’s telling, they’re named Zack and Jack — is making any relevant mistake, but does not argue that the relevant facts of which they’re ignorant are incapable of rendering their statement false.

28. Miscommunications can result if a conditional is passed on without regard for its basis. Sigmund can tell you that “If Pete takes the risk, he will win,” but if you pass that along to Pete, he will be misled. Indeed, if you’re a competent speaker, and you know Sigmund’s basis for his claim, you’ll know not to pass it along to the deliberating Pete. Problems may arise if you don’t know Sigmund’s basis. Here, if you wrongly assume that his basis is that he’s seen both players’ cards, you might pass the information along to Pete — who will then be confused, and will have to conclude that he is getting some bad information, either from Sigmund, who’s informed him that Gus is holding 83, or from you, who’s told him that if he takes the risk, he’ll win. But we often miscommunicate when we make false assumptions, so it should be no surprise that you’ll make this mistake in communication when you falsely assume that Sigmund’s basis is such as to render his conditional deliberatively useful.
