COMPARISON OF CONVENTIONAL, BALANCED AND SUFFICIENT BOOTSTRAPPING APPROACHES VIA CONFIDENCE INTERVALS AND EFFICIENCY

There are various bootstrapping approaches, distinguished by how the bootstrap samples are drawn. Conventional bootstrapping draws random bootstrap samples with replacement from all units of the original sample. Two basic alternatives have been proposed: balanced bootstrapping, in which each individual observation occurs with equal overall frequency across all bootstrap samples, and sufficient bootstrapping, in which only the distinct individual observations in each resample are used rather than all drawn units. This study compares the balanced, sufficient, and conventional bootstrapping approaches in terms of efficiency, bootstrap confidence interval coverage accuracy, and average interval length. Although the sufficient bootstrapping approach yielded more efficient estimators and narrower confidence intervals than the other two approaches in all cases, none of the approaches kept the actual coverage level of the confidence intervals within the desired limits. The conventional and balanced bootstrapping approaches gave quite similar results in terms of efficiency, coverage accuracy, and average interval length.
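To make the three resampling schemes concrete, the following is a minimal sketch in base R for the sample mean of a hypothetical data set. It is not the authors' simulation code; the data, the number of bootstrap samples B, and the use of R are illustrative assumptions.

    ## Minimal sketch of the three resampling schemes for the sample mean.
    ## The data, B, and the use of base R are illustrative assumptions,
    ## not the authors' actual simulation design.
    set.seed(1)
    x <- rexp(20)      # hypothetical positively skewed sample
    n <- length(x)
    B <- 1000          # number of bootstrap samples

    ## Conventional bootstrap: each sample of size n is drawn with
    ## replacement from all n units of the original sample.
    conv <- replicate(B, mean(sample(x, n, replace = TRUE)))

    ## Balanced bootstrap: every unit occurs exactly B times over all B
    ## samples, obtained by permuting B concatenated copies of x and
    ## splitting the permutation into B rows of size n.
    bal <- rowMeans(matrix(sample(rep(x, B)), nrow = B))

    ## Sufficient bootstrap: only the distinct units of each resample
    ## enter the statistic, instead of all n drawn units.
    suff <- replicate(B, mean(unique(sample(x, n, replace = TRUE))))

    ## Bootstrap standard errors of the mean under the three schemes.
    c(conventional = sd(conv), balanced = sd(bal), sufficient = sd(suff))

The resulting bootstrap standard errors illustrate the efficiency comparison made in the study; the sufficient scheme averages over distinct units only, which is what drives its narrower intervals.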
