
Why we (usually) don't have to worry about multiple comparisons

Andrew E. Gelman; Jennifer Hill; Masanao Yajima

Date:
Type:
Articles
Department:
Statistics
Permanent URL:
Abstract:
Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover, we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise. Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the p-values corresponding to intervals of fixed width). Thus, multilevel models address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern.
Subject(s):
Statistics
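To make the contrast described in the abstract concrete, the following is a minimal sketch, not taken from the paper: it assumes a simple normal-normal hierarchical model with known group standard errors, uses crude moment estimates of the hyperparameters, and is written in Python with numpy and scipy. The data, group structure, and estimation shortcuts are illustrative assumptions, not the authors' models or code.

# A minimal sketch (not the authors' code): partial pooling vs. Bonferroni
# for J group estimates with known standard errors, under the assumed
# normal-normal model  y_j ~ N(theta_j, sigma_j^2),  theta_j ~ N(mu, tau^2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
J = 8
mu_true, tau_true = 0.0, 0.5                  # low group-level variation
theta = rng.normal(mu_true, tau_true, J)      # true group effects
sigma = np.full(J, 1.0)                       # known standard errors
y = rng.normal(theta, sigma)                  # observed group estimates

# Crude moment estimates of the hyperparameters (illustrative only).
mu_hat = y.mean()
tau2_hat = max(y.var(ddof=1) - sigma.mean() ** 2, 1e-6)

# Partial pooling: each estimate is shrunk toward mu_hat, and its
# interval (given the hyperparameter estimates) is narrower.
precision = 1 / sigma**2 + 1 / tau2_hat
theta_pp = (y / sigma**2 + mu_hat / tau2_hat) / precision
se_pp = np.sqrt(1 / precision)

# Classical multiple-comparisons adjustment (Bonferroni): the centers
# stay at y_j, only the intervals get wider.
z_unadj = stats.norm.ppf(0.975)
z_bonf = stats.norm.ppf(1 - 0.025 / J)

for j in range(J):
    print(f"group {j}: "
          f"unadjusted {y[j]:+.2f} ± {z_unadj * sigma[j]:.2f}   "
          f"Bonferroni {y[j]:+.2f} ± {z_bonf * sigma[j]:.2f}   "
          f"partial pooling {theta_pp[j]:+.2f} ± {z_unadj * se_pp[j]:.2f}")

Running this illustrates the point made in the abstract: the Bonferroni intervals share the unadjusted centers and are simply wider, while the partially pooled estimates are pulled toward the common mean and their intervals are narrower, with the shrinkage strongest when the group-level variation (tau) is small relative to the standard errors.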
