No Confidence (Interval) In Parameters
Class 64
Many think confidence intervals are an improvement over P-values. Not really, no. There is no one alive, or dead, who interprets a confidence interval as theory demands. All (as in all) are Bayesians here. Which means you might as well go all the way and compute Pr(What I want to know | All evidence considered), and ignore all testing and parameter-based approaches.
Video
Links: YouTube * Twitter – X * Rumble * Bitchute * Class Page * Jaynes Book * Uncertainty
HOMEWORK: Given below; see end of lecture.
Lecture
This is an excerpt from Chapter 9 of Uncertainty.
Lastly, because confidence intervals are sometimes seen as the fix or alternative to p-values, let me prove to you nobody ever gets these curious creations correct. According to frequentist theory, the definition of a confidence interval (for a parameter) is this. If an experiment is repeated an infinite number of times, each one “identical” to the last except for “random” differences (ignore that this is meaningless), and for each experiment a confidence interval is calculated, then (say) 95% of these intervals will overlap or “cover” the “true” value of the parameter. Since nobody ever does an infinite number of experiments, and all we have in front of us is the data from this experiment, what can we say about the lone confidence interval we have? Only this: that this interval covers the “true” value of the parameter or it doesn’t. And that is a tautology, meaning it is always true no matter what, and, as we learned earlier, tautologies add no information to any problem.
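The frequentist definition above is easy to sketch in code: repeat the "experiment" many times, build an interval from each, and count how often the intervals cover the true parameter. A minimal simulation (assuming normal data with known sigma, and the sample size, true mean, and trial count are all arbitrary choices for illustration):

```python
import random
import statistics

random.seed(1)

TRUE_MU, SIGMA, N = 10.0, 2.0, 30  # assumed values, purely illustrative
Z = 1.96  # two-sided 95% normal quantile

TRIALS = 10_000
covered = 0
for _ in range(TRIALS):
    # One "experiment": draw a sample and form the usual 95% interval.
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    xbar = statistics.fmean(sample)
    half = Z * SIGMA / N ** 0.5  # known-sigma interval, for simplicity
    # Does this experiment's interval cover the true parameter?
    if xbar - half <= TRUE_MU <= xbar + half:
        covered += 1

print(covered / TRIALS)  # close to 0.95 across the whole collection
```

Note what the simulation does and does not deliver: the 95% is a property of the endless collection of intervals, not of any single one. Pick out any one interval from the loop and, per the theory, all you may say is that it covers the true value or it doesn't.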
We cannot say—it is forbidden in frequentist theory—that this lone interval covers with such-and-such a probability. And even if we manage to repeat the experiment some finite number of times, and collect confidence intervals from each, we cannot use them to infer a probability. Only an infinite collection, or rather one in the limit, will do. If we ever stop short and use the finite collection to say something about the parameter, we reason in a logical and not frequentist fashion. And if we use the length of an interval to infer something about the parameter, we also reason in a logical and not frequentist fashion. Since the majority of confidence intervals in use imply a “flat” (improper, usually) prior on the parameter of interest, all working frequentists are actually closet Bayesians. Now all we have to do is take the short step from Bayes to logic, and probability will be on firm ground everywhere.
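The closet-Bayesian point above can be made concrete. Under a flat prior on the mean (with sigma treated as known, and the data values below invented purely for illustration), the posterior for the mean is normal centered at the sample mean, and the probability the parameter sits inside the numerically identical "95% confidence interval" is itself 95%. That is the statement every user of confidence intervals actually believes, and it is Bayesian, not frequentist:

```python
import statistics
from statistics import NormalDist

# Invented observations, purely for illustration.
data = [9.1, 10.4, 11.2, 8.7, 10.9, 9.8]
sigma = 2.0  # treat sigma as known, for simplicity
n = len(data)
xbar = statistics.fmean(data)
se = sigma / n ** 0.5

# With a flat prior on mu, the posterior for mu is Normal(xbar, se).
posterior = NormalDist(mu=xbar, sigma=se)

# The usual 95% confidence interval, taken as a fixed pair of numbers.
lo, hi = xbar - 1.96 * se, xbar + 1.96 * se

# Pr(lo < mu < hi | data, flat prior): a direct probability statement
# about the parameter, which frequentist theory forbids.
prob = posterior.cdf(hi) - posterior.cdf(lo)
print(round(prob, 3))  # 0.95
```

The interval's endpoints are the same either way; only the logical interpretation changes. Which is the short step from Bayes to logic mentioned above.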
A wonderful and refreshing lecture, as always; thank you again for your work! It is a cool drink of water after a long journey through a hot desert. I've not been alone in my angst over the insanity of the Pr(Data we didn’t see | H false) approach.
Of course. The men are all from Harvard. Hahahahahahaha.