*See the end of this post for how the class might work and details about the video.*

**Video**

*Yes, the lighting is terrible. I give sallow a bad name.*

**Lecture**

There are really only two areas of philosophy when it comes to science: that of what is, and that of how we know what is. We need both and can neglect neither, though we will concentrate on the latter: epistemology. This is a course on philosophy, mathematics, and science, but geared toward understanding understanding, and not so much toward making new science, which I’ll leave to you.

There are those things we know with certainty, those in which we have only uncertainty, and an even huger array of things we don’t even know we don’t know about. That “huger” is itself an uncertain guess. After all, how can we know all of what we don’t even know we don’t know about?

Well, skip that question for now. Let’s think about thinking first.

Our texts will primarily be three: *Uncertainty*, from me, *Probability Theory: The Logic of Science*, from ET Jaynes, and David Stove’s *The Rationality of Induction*. It would be well to acquire and read all three. But if you can’t, I’ll try to give you what you need. Bits and pieces are all online.

Let’s start with Chapter 1 of Jaynes, in which he suggests taking a leisurely look at problems of uncertainty. We will take our time. This will seem too easy, even trivial, at first.

Jaynes starts with logical deduction.

Maybe the easiest is this logic (not in Jaynes): “If A (a proposition) is true, then A is true”. Seems easy enough, anyway. But think. There’s a lot going on here, more than what exists between the quotation marks. Much more. At the least there are words and the words are in a certain order, not to mention the punctuation. All of this information is *tacit* and not specified in the deduction. *But it is always there.*

So it may seem obvious and rational to see that the sentence is true—rationalists congratulate themselves for this vision—but we cannot *know* it is true without *assuming* a lot of other things, some of which we did *not* learn by rationalist deduction, but know by other means. What means?

Hold that thought in the back of your mind, and we’ll return to it in good time. For now, we’ll retreat to the standard logical beginning. Jaynes offers “modus ponens” (don’t sweat the names), like this, remembering “A” and “B” are propositions:

If A is true, then B is true.

A is true.

Therefore, B is true.
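This form can also be checked mechanically. Here is a small Python sketch of my own (not in Jaynes) that enumerates every truth assignment for A and B and confirms that whenever both premises hold, the conclusion does too:

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Modus ponens is valid iff every assignment satisfying the premises
# also satisfies the conclusion.
valid = all(
    b  # conclusion: B is true
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and a  # premises: "if A then B", and "A"
)
print(valid)  # → True
```

Only one of the four assignments survives the premises (A true, B true), and in it the conclusion holds, so the argument is valid.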

We accept for the sake of argument that “If A is true, then B is true.” For some A and B. It does not mean the sentence fits for *any and all* propositions A and B. That’s another subject to which we’ll return in time, when we examine Stove’s arguments that logic is not formal. But let that pass for now, too.

Another way to say this is that B follows from A. It is discovered or assumed A is true. Thus B must follow.

This does not mean A *caused* B every time. If A does cause B every time, and with no proviso or limitations or contingencies, then if A is true, then it must be that B is caused, and therefore B is true.

Our deduction is logical, not necessarily causal, though it can be causal. Jaynes gives an example like A = “It is raining”, and B = “It is cloudy.” Clearly, if it is raining it is cloudy—unless the Devil is beating his wife. And if we notice it is raining, we deduce it is cloudy. Or that the rain comes from clouds in the case of Satanic domestic abuse. The logic works.

But the causal direction is backwards. The clouds, in part, and as one of the parts of a cause, cause the rain. The rain doesn’t cause the clouds. Which brings up another immensely important point: cause. We need to fully understand what we mean by that word. We’ll let that pass for now, too, and let our common understanding hold sway.

The point is that we can have logical certainty without causal certainty. We can even get the cause **wrong** and still have logical certainty.

The second point is that “If A then B” is *subjective*. It is an *assumption* we make. So is the assumption of the second premise, that A is true, subjective. But once we have made these subjective choices, the logic flows in a rigorously objective fashion. So logic is both subjective and objective.

Our next example inverts the first (“modus tollens”):

If A is true, then B is true.

B is false.

Therefore, A is false.

Typically this will be proved using symbols and a sort of mathematical apparatus (truth tables or line-by-line logical analysis). These methods require some real work to ensure you grasp them. But maybe we can do it an easier way.
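For the curious, here is what that truth-table apparatus amounts to in miniature, as a Python sketch of my own devising:

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Modus tollens: from "if A then B" and "B is false", conclude "A is false".
# Valid iff every assignment satisfying the premises satisfies the conclusion.
valid = all(
    not a  # conclusion: A is false
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not b  # premises: "if A then B", and "not B"
)
print(valid)  # → True
```

The only assignment that satisfies both premises is A false, B false, and there the conclusion holds.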

Again we start with the subjective—hey, we picked it—sentence “If A is true, then B is true.” Then we assume or see that B is false.

The *only* way we know, or rather assume, that B can be true, logically, is if A is true. We said so. We see that B isn’t true. Which means A must be false, because if A were true, then B would be true, and it isn’t.

Next example.

If A is true, then B is true.

B is true.

Therefore, A is true.

This is a well known fallacy; the conclusion does not follow logically from the premises. How do we prove that? Usually by contradiction, by imagining other ways we can know B is true, like this:

A → B

C → B

D → B

E → B

…

There are, we imagine, lots of other ways that we can know B is true, A being only one of them. It could be that A is false and C is true, or D, and so on.

What is important, even crucial, to grasp is that these premises are also tacit, part of the argument, and that we supply the list. For if it were the case, in a causal sense even, and we just didn’t know it, that the *only* way for B to be true is if A is, then this argument is *not* a fallacy, once we add that tacit premise to it. Just as we add that picture as a tacit premise to prove it is a fallacy.
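We can also exhibit the fallacy mechanically. A short Python sketch of my own hunts for a truth assignment where both premises hold and the conclusion fails; finding even one is enough to sink the argument:

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Affirming the consequent: "if A then B", "B", therefore "A".
# Collect every assignment where the premises hold but the conclusion fails.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and b and not a
]
print(counterexamples)  # → [(False, True)]
```

The assignment A false, B true satisfies both premises while the conclusion fails, which is just the picture above in miniature: B got to be true some other way.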

Let’s continue.

If A is true, then B is true.

B is true.

Therefore, A becomes more plausible.

We have moved from strict logicality to its uncertain version. This is where some quail, and dismiss the conclusion as not logical. They do this, I think, because the traditional way of manipulating symbols to prove logical arguments doesn’t allow for this sort of thing. But pictures can.

A → B

C → B

D → B

E → B

…

W = (w_1, w_2, w_3, …)

Just like in proving the fallacy, we imagine, or assume, or accept, that there are lots of ways for B to be known to be true: “If A is true then B is true”, or “If C is true then B is true”, and so on. There is some number of these ways, which we don’t even have to know, but we could, and even do in some cases. We’ll get to those examples another day. There is also a World of propositions W, infinite in number, that have nothing to do with B, which we also do not have to know, except that they exist.

Now if B is true, then we know one of A or C or D or the rest must be true, and still nothing in W. So A indeed has become more plausible. If you don’t love this, consider we used the *exact same move* to prove the fallacy above.
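To see the arithmetic behind “more plausible”, here is a toy Python sketch with made-up numbers (the hypotheses and their probabilities are mine, purely for illustration): four mutually exclusive ways for B to be true, plus a remainder W under which B is false.

```python
# Toy model: A, C, D, E are mutually exclusive ways for B to be true;
# W stands for everything else, under which B is false.
# All numbers are invented for illustration only.
prior = {"A": 0.1, "C": 0.1, "D": 0.1, "E": 0.1, "W": 0.6}
b_given = {"A": 1.0, "C": 1.0, "D": 1.0, "E": 1.0, "W": 0.0}

# Total probability of B, then the plausibility of A once B is seen.
p_b = sum(prior[h] * b_given[h] for h in prior)    # 0.4
p_a_given_b = prior["A"] * b_given["A"] / p_b      # 0.25

print(p_a_given_b > prior["A"])  # → True
```

Learning B is true rules out W, so A’s plausibility rises from 0.1 to 0.25. The particular numbers don’t matter; the direction of the move does.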

Another example:

If A is true, then B is true.

A is false.

Therefore, B becomes less plausible.

It should be easy enough to see—using the same picture—that if A is ruled out as a reason for B, then while one of the other reasons for B being true might hold (C through etc.), there is at least *one less reason* for B being true. Which is to say, A.

So it does indeed follow *logically* that because A is false B is now less plausible.
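The same toy model as before (invented numbers, purely illustrative) shows the drop in plausibility when A is ruled out:

```python
# Toy model: A, C, D, E are mutually exclusive ways for B to be true;
# W stands for everything else, under which B is false.
# All numbers are invented for illustration only.
prior = {"A": 0.1, "C": 0.1, "D": 0.1, "E": 0.1, "W": 0.6}
b_given = {"A": 1.0, "C": 1.0, "D": 1.0, "E": 1.0, "W": 0.0}

p_b = sum(prior[h] * b_given[h] for h in prior)                        # 0.4
p_not_a = 1 - prior["A"]                                               # 0.9
p_b_and_not_a = sum(prior[h] * b_given[h] for h in prior if h != "A")  # 0.3
p_b_given_not_a = p_b_and_not_a / p_not_a                              # 1/3

print(p_b_given_not_a < p_b)  # → True
```

With A struck from the list there is one fewer reason for B, and its plausibility drops from 0.4 to 1/3.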

*Less plausible* is another way to say *less likely*, and that is another way of saying *less probable*.

We have just proved in these two examples that probability is a matter of logic. There was nothing subjective about the proof, nor anything about “relative frequencies”, nor anything about a strength of evidence, per se, nor about causality, which we’ve already seen doesn’t count. There isn’t anything here except logic.

All we did was use the very implicit and tacit premises that people use to show formal versions of some arguments are fallacies, tacit premises which are forgotten in writing down formal truth tables and the like.

Now I don’t mean this as a complete proof, as there are all kinds of niceties to discuss. This is just a tease; more is to come.

Let’s end with one more example, also from Jaynes, which is your homework:

If A is true, then B becomes more plausible.

B is true.

Therefore, A becomes more plausible.

HOMEWORK: HOW CAN THIS BE SO?

Here endeth the lesson. We have done *three* pages of Jaynes, with only 647 to go. Not to mention the other books.

**Class Notes**

Even though we only did a couple of arguments, I have the idea I rushed through everything in the video. But it’s a half hour already! I think, too, I need to get a lavalier microphone, since this one is not cutting it when I turn toward the board.

I have no idea how anybody can “sign up” for this class, or even if that’s a necessity.

Homework and questions: I’ll read all comments, but I can’t take the time to answer all questions if there are more than one or two, especially because I’m posting this video in various places. I’ll take what I think is necessary and *answer them in the next lesson*.

All ideas welcomed.

