
I just finished teaching a 3-week "Bayesian Inference and Reasoning" course at the African Institute for Mathematical Sciences in Cape Town. The hardest step, as it has been everywhere else I have taught such a course, was the transition from a discrete to a continuous set of hypotheses. Even knowing how difficult this step is, I needed three attempts to get it mostly right.

(Like your _Uncertainty_ book, the course is very much inspired by Jaynes's approach.)

author

Thanks for sticking with it.


William, just to clarify—how are we counting defective widgets in your scenario? If we check every single widget, we'd know the exact number, and there'd be no need for statistics at all. So, I assume we're sampling, right? Random sampling, it must be (Yoda). I know you and randomness don’t always get along, but if the sample isn’t random, how can we be sure it represents the entire population of widgets from that machine?

So, let's say we sample (random or not), and that gives us an initial prior. Now, we start updating the posterior. But how many times do we update? From what I gathered in your lecture, it seems there’s no real end to this process. Theoretically, don’t we just end up counting all the widgets anyway? If that’s the case, why even bother with the whole sampling and Bayesian process in the first place? Practically speaking, what’s the point if it leads us to count everything anyway?
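Here is a crude sketch of the kind of updating I have in mind, assuming a conjugate Beta-Binomial model and made-up batch counts (I know this may not be your setup):

```python
# A crude sketch only: a conjugate Beta-Binomial model for the defect
# rate, with invented batch counts. Not claiming this is the exact setup.
from scipy import stats

# Flat Beta(1, 1) prior on the proportion of bad widgets.
alpha, beta = 1.0, 1.0

# Each batch is (number sampled, number found defective); hypothetical data.
batches = [(50, 2), (50, 1), (50, 3)]

for n, bad in batches:
    alpha += bad        # defectives update alpha
    beta += n - bad     # non-defectives update beta
    post = stats.beta(alpha, beta)
    lo, hi = post.interval(0.95)
    sampled = int(alpha + beta - 2)
    print(f"after {sampled} widgets: 95% interval for defect rate "
          f"= ({lo:.3f}, {hi:.3f})")

# The interval keeps shrinking with each batch, so in practice you would
# stop when it is tight enough for the decision at hand, rather than
# inspecting every widget.
```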

author

Yes, sampling. To predict future proportions of bad widgets.

I should have made that clear, because your point is the one I am continually harping on.
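As a rough sketch only, continuing the assumed Beta-Binomial toy model from the comment above (illustrative numbers, not real data, and not the only way to frame it), the prediction step might look like this:

```python
# Continuing the same toy Beta-Binomial sketch: given the posterior on
# the defect rate, predict the number of bad widgets in a future batch.
from scipy import stats

alpha, beta = 7.0, 145.0   # posterior from the hypothetical samples above
m = 100                    # size of a hypothetical future batch

# The posterior predictive for defectives in the next m widgets is
# Beta-Binomial(m, alpha, beta).
pred = stats.betabinom(m, alpha, beta)
print("expected bad widgets in next batch:", pred.mean())
print("P(5 or fewer bad):", pred.cdf(5))
```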
