The example of the number of children in school classes provided by Anon is illustrative. Neither the average nor the standard deviation is of any use in this case. There is only a finite number of possibilities (most likely, there is an upper bound; for Germany this bound is around 33; any school class larger than this is due to exceptional circumstances). Any decent reporting of class sizes should come in the form of a histogram, period.
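To make that concrete, here is a toy sketch (the class sizes are invented, not real German data): the mean and standard deviation look perfectly respectable while the histogram shows a distribution that neither number even hints at.

```python
# Invented class sizes (not real data): the mean and standard deviation
# hide the two-humped shape that a plain histogram shows at once.
from collections import Counter
from statistics import mean, stdev

class_sizes = [12, 13, 13, 14, 14, 15, 28, 29, 29, 30, 30, 31, 32, 33]

print(f"mean = {mean(class_sizes):.1f}, sd = {stdev(class_sizes):.1f}")

# Crude text histogram: one '#' per class of that size.
for size, count in sorted(Counter(class_sizes).items()):
    print(f"{size:2d} | {'#' * count}")
```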
Yes. Ideally every measurement should be a graph, not a number.
Mr. Anon,
Regarding your remark:
"On the internet, however, the common opinion is that resolution is increased by repeating a measurement many (infinite) times. Example: sea level is measured with a resolution of 1 cm, but by taking many measurements, it is claimed to get a mean value with a precision of, say, up to 1 mm"
This is possible because uncertainty is treated as multiplicative (in the Bayesian sense): if each measurement has a 0.25 error, then two measurements give 0.25 x 0.25 = 0.0625, and so on, until you converge on 1.11 cm with an error of 0.0000001. It is horse-patooty, of course, and most of the time disingenuous.
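For what it's worth, the usual textbook form of that claim is not multiplying probabilities but the standard error of the mean, which shrinks like 1/sqrt(n) so long as the individual errors are independent and unbiased. A quick simulation sketch (every number here is mine, made up for illustration):

```python
# Invented numbers. If each reading carries independent, unbiased noise of
# about 1 cm, the scatter of the *mean* of n readings shrinks like
# 1/sqrt(n): roughly 1 mm after ~100 readings.
import random
import statistics

TRUE_LEVEL_CM = 1.11   # pretend "true" sea level
NOISE_SD_CM = 1.0      # per-reading noise, ~1 cm resolution

def spread_of_mean(n, trials=2000):
    """Standard deviation of the sample mean across many repeated experiments."""
    means = []
    for _ in range(trials):
        readings = [random.gauss(TRUE_LEVEL_CM, NOISE_SD_CM) for _ in range(n)]
        means.append(statistics.fmean(readings))
    return statistics.stdev(means)

for n in (1, 10, 100, 1000):
    print(f"n = {n:5d}  sd of the mean ~ {spread_of_mean(n):.3f} cm "
          f"(theory: {NOISE_SD_CM / n ** 0.5:.3f} cm)")
```

The 1 mm claim only holds under those two assumptions, independence and no shared bias, which is exactly where the honest argument gets murky.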
What if there is a huge difference between the data points? Random scatter? Fraud? Obviously, this is abused and used in all sorts of ways, as Mr. Bragg said. What does a "mean" mean when the data are pure scatter, or skewed? This is why housing prices and sales are (or should be) reported as medians, not means.
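On the median point, a toy example (prices invented): one outlier drags the mean around while the median barely moves, which is exactly why skewed data like housing prices get summarized by the median.

```python
# Invented prices: a single outlier shifts the mean a lot, the median hardly at all.
from statistics import mean, median

prices = [210_000, 230_000, 250_000, 260_000, 275_000, 300_000, 4_500_000]
print(f"mean   = {mean(prices):,.0f}")    # pulled far above the typical price
print(f"median = {median(prices):,.0f}")  # stays near the typical price
```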
I likely don't have all this right in my head this time of day and only 0.90025 cup of coffee in me.
I'd go with the questioner on this. Reporting a highly precise mean is correct with respect to the math itself, but reporting it also implies confidence in the 'perfect meanness' of my visual estimates: I'm guaranteeing that my visual estimates are perfectly normally distributed. But that's rarely true. Every eye and every visual perception system has consistent preferences, which can't be factored out by any amount of math.
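That objection is easy to demonstrate: give every estimate the same small perceptual offset and no amount of averaging recovers the true value. A toy sketch (bias and noise values are invented):

```python
# Invented numbers: a consistent perceptual bias survives averaging.
# The sample mean converges on (true value + bias), not the true value.
import random
import statistics

TRUE_VALUE = 10.0
OBSERVER_BIAS = 0.3   # a consistent "preference" of the eye
NOISE_SD = 1.0        # random scatter on top of the bias

for n in (10, 1_000, 100_000):
    estimates = [random.gauss(TRUE_VALUE + OBSERVER_BIAS, NOISE_SD) for _ in range(n)]
    print(f"n = {n:6d}  mean estimate = {statistics.fmean(estimates):.3f}  (true = {TRUE_VALUE})")
```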