Investigating the Performance of Omega Index According to Item Parameters and Ability Levels

Purpose: Several studies in the literature have investigated the performance of ω under various conditions. However, no study examining the effects of item difficulty, item discrimination, and ability restrictions on the performance of ω could be found. The current study aims to investigate the performance of ω under the conditions described below.

Research Methods: The b parameter range was restricted to two levels (-2.50 to 0.00 and 0.01 to 2.50) and the a parameter range to two levels (0.10 to 0.80 and 0.81 to 1.50). Crossing the a and b parameter ranges yielded four item parameter cells. For each cell, responses of 10,000 examinees to 20 items were generated; combining the four data sets produced an 80-item dataset. To examine the effects of the source's and the copier's ability levels on the performance of ω, the ability range was divided into four intervals (-3.00 to -1.50, -1.50 to 0.00, 0.00 to 1.50, and 1.50 to 3.00). Crossing the source and copier ability ranges yielded sixteen combinations. For the power study of ω, answer copying was simulated for each of the sixteen source-copier ability pairs within each item parameter cell. For the Type I error study, data containing no cheating were examined under the same conditions and levels.

Findings: Type I error inflation was observed for the lower copier ability levels. The power study indicates that when a high-ability copier copied the answers to easy, highly discriminating items from a high-ability source, the power of ω was weakened.

Implications for Research and Practice: The study suggests that researchers should pay attention to the copier's and source's ability levels and to the difficulty of the copied items when using the ω index to detect answer copying.
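
As a rough illustration of the data-generation design summarized in the Research Methods paragraph, the sketch below crosses the two a-parameter ranges with the two b-parameter ranges into four 20-item cells, generates responses for 10,000 examinees, combines the cells into an 80-item dataset, and lists the sixteen source-copier ability-interval pairs. It assumes a two-parameter logistic generating model, uniform sampling of item parameters within each range, and standard-normal abilities; none of these details are stated in the abstract, and all names are illustrative.

```python
# A minimal sketch of the crossed data-generation design summarized above.
# Assumptions not stated in the abstract: a two-parameter logistic (2PL)
# generating model, uniform sampling of item parameters within each range,
# and standard-normal abilities. All names are illustrative.
import numpy as np

rng = np.random.default_rng(1234)

N_EXAMINEES = 10_000
ITEMS_PER_CELL = 20

a_ranges = [(0.10, 0.80), (0.81, 1.50)]   # discrimination ranges
b_ranges = [(-2.50, 0.00), (0.01, 2.50)]  # difficulty ranges

def generate_cell(a_range, b_range, theta):
    """Generate 2PL responses for one 20-item parameter cell."""
    a = rng.uniform(*a_range, size=ITEMS_PER_CELL)
    b = rng.uniform(*b_range, size=ITEMS_PER_CELL)
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    return (rng.uniform(size=p.shape) < p).astype(int)

theta = rng.normal(size=N_EXAMINEES)

# Crossing the two a ranges with the two b ranges gives four cells of
# 20 items each; stacking them column-wise yields the 80-item dataset.
cells = [generate_cell(a_rng, b_rng, theta)
         for a_rng in a_ranges for b_rng in b_ranges]
responses = np.hstack(cells)  # shape (10000, 80)

# Crossing the four ability intervals for source and copier gives the
# sixteen source-copier conditions examined in the study.
ability_intervals = [(-3.00, -1.50), (-1.50, 0.00), (0.00, 1.50), (1.50, 3.00)]
source_copier_pairs = [(s, c) for s in ability_intervals for c in ability_intervals]

print(responses.shape, len(source_copier_pairs))  # (10000, 80) 16
```

In the study itself, ω would then be computed for the simulated copier-source pairs: in general terms, ω standardizes the observed number of identical copier-source item responses by the expectation and standard error implied by the response model given the copier's ability, so a large positive value flags unusually high agreement. The sketch above stops at data generation and does not implement the copying conditions or the index itself.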
