Video
Links:
It still feels like I’m rushing things, but the videos are already a half hour. I can’t see your faces so it’s difficult to tell. Let me know about the tempo.
I also said I could not think of a homework in the video. Well, I have.
HOMEWORK now in video.
Lecture
A few wondered when we were going to get to the math, saying they didn’t need all this philosophy and could rely on their intuitions. This is not so. You can’t use math on things without knowing what the meaning of the use is. And to understand meaning we need philosophy. There is no escaping this.
Everyone has a philosophy—saying you have none is flat out false—so it’s well we get the right one. Every time you use a model you are invoking some kind of philosophy, very likely inconsistently, if you’re like most. Intuition can be great, and it is absolutely necessary, as we’ll see. But it can also mislead, especially when things become difficult, as they’re about to do.
On the other hand, we meet at the end of the lecture an example of the kind of interminable nitpicking that gives philosophy a bad name. But it’s a useful example, because it shows the damage to thought that can be caused by not keeping in mind the crucial distinction between local and necessary truths.
This is an excerpt from Chapter 1 of Uncertainty. All the references have been removed.
Faith
Faith is another difficult word. It has connotations of trust and honesty, but also of religion. In religion it is used to describe a kind of belief or as a label for a system or practice, e.g. “the Methodist faith.” But you’ll have noticed I used it above when describing epistemology. It is not out of place. To repeat: we know axioms and the like are true because our intellects tell us they are, and we trust that our intellects are not misleading us; that is, we have faith in our inductions. Faith is in this sense ultimate belief, the ground of all our beliefs. Belief is a decision, an act on top of knowledge or uncertainty. We prove via induction an axiom is true. This is knowledge. And then we believe, or have faith (if you like), in this knowledge. Of course, though our intuitions sometimes mislead us, it is false that they always do. Belief is not the same as knowledge because we can also believe that which is unlikely or uncertain, or even necessarily false. The practice of statistical hypothesis testing asks us to believe or have faith in the uncertain, in the unproved. The error is to assume that knowledge or probability and belief or faith are identical.
There is also a scurrilous definition of faith that it pleases some to state (see the Skeptic’s Dictionary), which goes something like this: “Faith is believing contrary to evidence.” It is possible to believe something you know is false, but the act is bound to cause distress. For example, I may claim to believe that I do not exist, based on who knows what evidence, but I am forced to confront myself when making the claim, which is psychically painful. I have to discount the knowledge, to pretend it doesn’t exist while knowing it does. Doublethink. If I say, “I take it on faith that I don’t exist”, then this would fit the skeptic’s definition. But nobody really believes statements like this. The proposition “I don’t exist” starts with its own disproof.
Anyway, the kind of skeptic who says faith is believing contrary to evidence is substituting sloganeering for actual argument. He has a set of premises which lead him to knowledge or high certainty of some proposition, call it “not-X” (e.g., “God does not exist”), and he calls his premises “the evidence”, which is fair enough. Except that his opponent has a different set of premises which he too calls “the evidence”. Who is right? Well, he who can show a valid sound deduction, he who does not mistake a conditional for a necessary truth. My favorite valid and sound deduction about this proposition is by Feser.
A second tactic is for the skeptic to claim he has found a flaw in a proof for X. This may even be a genuine flaw for a given argument. If it is, and the skeptic is unable to persuade his opponent of it, but this opponent still claims to believe X based on the (flawed) proof, then the skeptic has a good example of somebody believing a claim contrary to evidence—but not contrary to faith. Love of bad arguments happens simply and frequently because most people are not well equipped to judge philosophical arguments at a deep level. What usually happens is that the opponent will hear the claim that a skeptic has found a flaw, and he might even believe this skeptic, but the opponent will still believe on other grounds. And this is not unreasonable unless the skeptic offers a necessary valid sound proof of not-X. If the skeptic hasn’t, then he commits the fallacy of supposing that, because one argument for X is flawed, all are. When this happens, what the skeptic really wishes is that everybody would be like him.
This digression is not as odd as it might seem. Arguments shooting past their targets are found everywhere. Scientism and the politicization of science have increased the kinds of fallacies noted here.
Belief & Knowledge
The word belief is ambiguous: statements of belief can belie knowledge, certainty, faith, or even uncertainty. You can only know what is true, but you can believe many things. Belief (the word) is often accompanied by the idea of lying; many people lie and say they believe a thing, while secretly doubting or disbelieving. This is what makes politics. The dependability of a person’s public utterances accurately matching his actual state of mind depends strongly on his milieu. In repressive or totalitarian societies, like in the Soviet Union and some Western universities, the correspondence between public avowals and belief can be weak, or even negative.
We have to be careful and settle on one of the many definitions of belief. True belief (or just belief) is averring to or the acceptance of a conditional or a necessary truth. It is assent, or the acting as if some proposition were true, either necessarily or in the circumstances. As said above, belief is an act, a decision; it is not knowledge itself. I should believe conditional truths like “George wears a hat” given “All Martians wear hats and George is a Martian”. I had better believe it. Why? Because the rules of truth and of logic demand it. If I doubted, which is to say if I did not believe “George wears a hat” given this evidence, it must be because I am using different evidence than the propositions “All Martians etc.” What this different evidence is doesn’t matter, but I must have it. I may claim to hold with “All Martians etc.” but if I still don’t believe “George wears a hat” then I must also be accepting other evidence which contradicts or trumps “All Martians etc.”
We’re finally ready to tackle knowledge, which is a necessary truth. You cannot have knowledge of a conditional truth, but you can believe one. Rather, you can have knowledge of the conditionality of conditional truth. Knowledge is sometimes called “justified true belief”, the justification being that chain of sound valid argument which leads to indubitable axioms. This means (though we haven’t yet discussed them) we can’t have knowledge of probabilistic propositions (we can surely understand the propositions themselves, of course). It will turn out that propositions like “Given the evidence, the probability of X is p” (the entire thing inside the quotation marks) are themselves necessarily true: p is not true, mind, but the proposition in which it appears is.
Succinctly: we only know and must believe necessary truths, and we cannot know but can believe (and usually do) conditional truths.
There are other (more confusing) ways to think about knowledge. Here I paraphrase the well-known ideas of Laurence BonJour (and use his notation p for a proposition). In order to know (the truth of) a proposition p in the “Cartesian conception of knowledge” (a theory!) three conditions must be met, the first two of which are: a person must believe or accept the proposition p without harboring doubt, and the person must have a reason or justification that guarantees the truth of p. The third condition is the strangest: p must be true.
But Bonjour, like most authors, does not separate necessary from conditional truths, nor do most authors recall the goal of the analysis of belief. I shall keep the distinctions. There are always two aspects to consider: whether something is necessarily or conditionally true, and what argument somebody is using to arrive at their proposition of interest. The failure to recognize these distinctions in truth opens up a curious situation called Gettier “problems”.
Here is an example. In a standard raffle somebody must win; via the rules of such games we therefore know and believe that p = “Somebody must win.” This is an existence proof, a statement of ontology, and a conditional truth. It is not a necessary truth because there is nothing proving it is logically necessary the raffle goes as planned (for instance, it may be played in Chicago). Who will win we do not learn until the drawing. If you are in the raffle it is therefore conditionally true, given the premises about standard raffles, that p = “I might win”. You believe this given the accepted rules of raffles and because you own at least one ticket. The conditional truth of p is the reason and justification for believing p; it is also the proof p is conditionally true. Again, p is not necessarily true.
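The raffle’s two propositions can be illustrated numerically. This is a minimal sketch, assuming an idealized standard raffle; the function name, ticket counts, and trial count are mine, not from the text. In every simulated drawing somebody wins by construction, while how often I win depends entirely on the premises about how many tickets I hold:

```python
import random

def raffle_share(n_tickets, my_tickets, trials=100_000, seed=0):
    """Simulate a standard raffle: exactly one winning ticket per trial.

    'Somebody must win' holds in every trial by construction.
    'I might win' is conditionally true whenever my_tickets > 0, and the
    frequency with which I do win follows from the premises alone.
    """
    rng = random.Random(seed)
    my_wins = 0
    for _ in range(trials):
        winner = rng.randrange(n_tickets)  # somebody always wins
        if winner < my_tickets:            # tickets 0..my_tickets-1 are mine
            my_wins += 1
    return my_wins / trials
```

With one ticket of a hundred the winning frequency settles near 1/100; with zero tickets, “I might win” is false on those premises and the frequency is exactly zero. The simulation changes nothing philosophically: it merely plays out the conditional truths already contained in the premises.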
The example is worth giving because of so-called Gettier problems, named for Edmund Gettier, the man who first inflicted them on philosophy. Gettier claimed there were situations in which a person has a justified true belief, yet that belief does not meet the test of knowledge because the statements p are not true. Keep p = “I might win”, which you believe is true because your wife said she bought you a ticket for the raffle. Yet your wife was teasing; she didn’t buy a ticket, she only told you she did. However, unbeknownst to her or you, your mother did in fact buy you a ticket. Therefore you believe p, and indeed p is true, but, say Gettier’s followers, your belief cannot count as knowledge because your belief is based on a fiction (your wife’s joke).
Naturally, I do not account situations like these as problems in understanding uncertainty. Since truth is conditional, the conditions you use to judge the truth of p—your wife said she bought a ticket, your wife told the truth, the rules of raffles, etc.—prove p conditionally. That is, given you accept those premises p is true, you should believe p. P is also conditionally true given the alternate premises “your wife lied and your mother bought you a ticket” (and removing “your wife told the truth”). P is also conditionally true if you live in Chicago, don’t have a ticket but you get the wink from your alderman. There are many ways for p to be conditionally true. Your belief is driven by p’s truth conditional on whatever evidence you used to prove p conditionally true.
But p is not necessarily true; you do not have knowledge that it is. No one does. There is therefore no problem with the concept of knowledge as justified true belief. Why? The outside observer who is aware of what both your wife and your mother have done, and who also is aware of the rules of raffles, also believes p is true, though in his case he is closer to a necessary truth because he has removed more of the contingency than you have. And once again, as must be repeatedly emphasized, you can still believe your conditional truth and act on it; so can the outside observer who knows of your wife’s shenanigans and your mother’s beneficence.
Gettier “problems” stem from misunderstanding which evidence is being used at what stages and by whom to judge conditional and necessary truths. As long as you keep these clear and distinct, the “problems” disappear. To be clearer: you argue from the premises “My wife bought me a ticket and this is a standard raffle”, which are probative of p = “I might win”. P is true given the premises. But we know, i.e. we have justified true belief, that p is false given “The wife did not buy him a ticket.” We need the JTB account of knowledge in order to argue that something is wrong with it! The goal of the analysis is the crux. Is the goal to ensure p is true? Or is it to ensure the premises are themselves true or clean from as much contingency as possible? If our goal is to make predictions of p’s truth, then you will have made an accurate prediction. But if the goal is to assess the truth of the premises, then even though you are correct about p, you still fail because your premise, given the outside premise about the fact of the matter, is false. That means you only have local or conditional justified true belief because you only accepted contingent premises. But since most of the premises we accept are contingent, most accounts of JTB are contingent in the same way the raffle example was.
Because some consider the JTB account of knowledge to be “problematic” (again, how do they know this?) there have been many attempts at “restoring” the idea of knowledge to philosophy, such as “virtue” or “luck” epistemology. There is no hope of covering all these thrusts here, but it’s worth examining very briefly the idea of “epistemic luck”. The idea is that, in the absence of JTB, an “agent” (by which philosophers always mean a person but somehow can’t bring themselves to say) hits upon an observable premise that is true. For some reason these accounts always focus on observables and not non-observable propositions that can be learned via induction (see Chapter 3). At any rate, suppose you are confronted by a multiple-choice quiz, with possible answers A–D. D, as it turns out, is the answer designated as correct by the teacher. Suppose you have no idea what the correct answer is, but you don’t want to leave the answer blank, so you choose D. So you “win”, just like in the lottery. Some philosophers want to say you hit upon knowledge because of your lucky guess—and it was luck. But this again mixes up the goal of the analysis. If your goal, as an epistemologist, is to check correct predictions, then indeed you nailed it. If the goal is instead to check the premises, you have failed. Why?
We learn later that you must have been arguing from premises similar to, “There are four possibilities, only one of which is correct, and I must select one.” The probability, given this premise, that you are right is 1/4. You might change the premises, if you know something of the subject and grammar, knowledge of the teacher, and so on; but none of that makes any difference. The point is that your premise was false in the sense that it should have been (something like) “Oho, I know what the question means and only one answer is possible.” That few students will admit they didn’t have the right premise and were only guessing limits the usefulness of premise-analysis, which shows why multiple-choice questions are so poor at assessing knowledge. At any rate, luck had nothing to do with actual knowledge, which is always formed by premises of some kind. In this case, the premises were wrong. Justified-true-belief is in no danger.
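The 1/4 that follows from the guessing premise can be played out in a quick simulation. This is a sketch under my own assumptions (uniform random guessing among four answers; the function name and trial count are invented for illustration):

```python
import random

def guess_accuracy(n_choices=4, trials=100_000, seed=1):
    """Frequency of correct answers under the premise: 'There are
    n_choices possibilities, only one of which is correct, I must
    select one, and I know nothing else.'
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        answer = rng.randrange(n_choices)  # the designated correct choice
        guess = rng.randrange(n_choices)   # a pure guess, no knowledge
        correct += (guess == answer)
    return correct / trials
```

The frequency settles near 1/4, exactly what the premise dictates, whichever letter the student happens to pick: the “luck” is nothing but the probability implied by the premises he actually held.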
Because true is such a useful word, and because necessarily true and conditionally true are cumbersome, like most people in ordinary speech, I will use true to mean either, relying on the context in most cases to define whether we have a conditional or necessary truth. But if there is sufficient ambiguity or the subject is important, I’ll spell it out.
Very inspirational class today.
On faith and being an idiot for believing things without evidence.
Do cells exist? Not only brain cells, but any kind of cell.
I have heard the argument that cells do not exist in vivo, in physiologically good conditions, but that they form as living tissue is damaged or decays. The analogy here is: when a plate or a dish falls and breaks, it breaks into pieces, but the pieces are not constitutive parts of it. And if you use glue to put all the pieces together, you don't have a plate, but something that looks like a plate, but is only pieces of glass or ceramic glued together.
Now, I have always believed, without evidence, that histology and cytology are true. But, after I looked into that, it turns out that both sciences assume the cell exists normally. So there is some circular reasoning involved in all this.
I have never seen cells in vivo. Has anyone seen them? I mean cells in an actually functioning organ inside a living being.
Do medical doctors and biologists assume this on faith, or do they know this as a fact?
I don't think it's rational to argue: cytology explains the phenomena we see in living organs, so cells (the assumption of cytology) do exist.
Am I wrong in having faith, without evidence, that the brain is composed of cells?
If cells do not exist normally, as we are taught, all the science of physiology and all the science of pathology, and the accompanying philosophy of disease, have to be rethought.
Which is not a task for mathematics or probability, but these two branches of knowledge have something to say about rethinking.
Notice that I don't claim that diseases do not exist. But I have doubts that the current dominating idea that all disease starts in cells is a true proposition.
Also notice that I don't claim that we have to be able to see something (a cell) to say that it exists (although, if something exists and it's visible, it would be great to see it.) My claim is that the current model may be false if the argument I have presented is true: that cells appear or are formed as a consequence of a disease.
Is this claim wrong?
In my experience, logic has as much to do with science as it has with religion or politics or any other human endeavour: novel writing, music, or gold-mining.
That is to say, it is an occasional passenger but I have never met any person nor seen any human endeavour where it is driving.
I've heard of some but investigation has shown them to be mythological creatures, not amenable to capture and inspection.