written by Eric J. Ma on 2016-08-18
I've recently been considering the problem of when to stop measuring something, especially in the biological sciences. It turns out that John Kruschke has given this a ton of thought in the psychological sciences.
I listened to a talk, freely available on YouTube, about how to decide when to stop collecting experimental data. It's a very good talk, in which Kruschke makes the case that precision, not acceptance or rejection of some hypothesis, should be the goal of measurement.
The main problem with stopping data collection once some rejection or acceptance criterion has been fulfilled is that the reported parameter estimates will tend to be biased away from their true values. Yet "collecting data till I get some p < 0.05" pretty much sums up the mentality of a large swathe of experimental biologists.
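To see why this stopping rule is a problem, here's a minimal simulation (my own illustrative sketch, not from Kruschke's talk): we draw data from a null distribution with a true mean of zero, run a z-test after every batch, and stop the moment it comes out "significant". The batch size, maximum sample size, and threshold are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def optional_stopping_trial(max_n=200, batch=10, z_crit=1.96):
    """Collect data in batches from a null distribution (true mean = 0),
    testing after every batch and stopping as soon as |z| > z_crit."""
    data = []
    while len(data) < max_n:
        data.extend(rng.normal(0.0, 1.0, size=batch))
        x = np.asarray(data)
        z = x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))
        if abs(z) > z_crit:
            return True, x.mean()  # "significant" -- a false positive
    return False, x.mean()

results = [optional_stopping_trial() for _ in range(1000)]
false_positive_rate = np.mean([sig for sig, _ in results])
# Among runs declared "significant", the estimate is biased away from 0:
bias_when_sig = np.mean([abs(m) for sig, m in results if sig])

print(f"False positive rate: {false_positive_rate:.2f}")
print(f"Mean |estimate| among 'significant' runs: {bias_when_sig:.2f}")
```

Even though every run samples from a distribution with no effect at all, peeking after every batch pushes the false positive rate well above the nominal 5%, and the runs that do cross the threshold report effect sizes inflated away from zero, which is exactly the bias described above.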
I think the more rational way to approach scientific measurement is to treat it the way that the physicists do. They're out to measure some property of nature, in an "absolute" sense, and then quantify the uncertainty surrounding that measurement. For biologists, oftentimes, we're out to measure some property relative to some control, and so the uncertainty surrounding the computed difference should be calculated.
Kruschke advocates for stopping measurements when we reach a precision that is better than some Region Of Practical Equivalence (ROPE, he's good with acronyms). There's no free lunch here - we have to define ahead of time what the width of that region looks like. He also acknowledges that there are scenarios where the best precision of our measurements, or the natural variation/noise in the population, preclude the collected data from converging on the desired precision. Even then, the data collected are valuable: it's informative about the best precision that we can achieve.
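As a rough sketch of the idea (Kruschke's actual procedure uses a Bayesian highest-density interval; here I substitute a normal-approximation 95% interval, and the ROPE width, true effect, and batch size are all assumptions for illustration), the stopping rule looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

ROPE_WIDTH = 0.5           # decided ahead of time, before any data
TRUE_MEAN, TRUE_SD = 1.3, 1.0

data = []
interval_width = np.inf
# Collect in batches until the 95% interval on the mean is narrower
# than the pre-specified ROPE width, i.e. the precision goal is met.
while interval_width > ROPE_WIDTH:
    data.extend(rng.normal(TRUE_MEAN, TRUE_SD, size=5))
    x = np.asarray(data)
    sem = x.std(ddof=1) / np.sqrt(len(x))
    interval_width = 2 * 1.96 * sem  # normal-approximation 95% interval

print(f"Stopped after n = {len(x)} samples")
print(f"Estimate: {x.mean():.2f} +/- {1.96 * sem:.2f}")
```

Note that the stopping decision depends only on the width of the interval, never on where the interval sits relative to zero, so there's no pressure to stop at a flattering moment. And if the noise floor means the interval can never shrink below the ROPE width, that's the scenario Kruschke acknowledges: the data still tell you the best precision you can achieve.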
The focus on precision here is because, given an agnostic state towards nature, true parameter values cannot be known, and can at best only be estimated through measurement. Naturally, scientists should be interested in accuracy, and not just precision, and I think continued, repeated measurements are our best bet at getting there, assuming our measurements are set up right.
A final thought: I'm a proponent of moving away from publishing only "significant" (statistically or otherwise) studies, and I agree with Kruschke that scientists should be publishing studies that pose an unexplored, well-motivated question and report the uncertainty in whatever they measure. Evaluating whether something is "well-motivated" is a tough thing to do, and there are no easy proxies for motivation, but I think that's the part that makes scientific inquiry interesting.
@article{
ericmjl-2016-precision-rejectionaccpetance,
author = {Eric J. Ma},
title = {Precision, and not Hypothesis Rejection/Acceptance!},
year = {2016},
month = {08},
day = {18},
howpublished = {\url{https://ericmjl.github.io}},
journal = {Eric J. Ma's Blog},
url = {https://ericmjl.github.io/blog/2016/8/18/precision-and-not-hypothesis-rejectionaccpetance},
}
I send out a newsletter with tips and tools for data scientists. Come check it out at Substack.
If you would like to sponsor the coffee that goes into making my posts, please consider GitHub Sponsors!
Finally, I do free 30-minute GenAI strategy calls for teams that are looking to leverage GenAI for maximum impact. Consider booking a call on Calendly if you're interested!