Detecting Careless Responses to Self-Reported Questionnaires

Problem Statement: The use of self-report questionnaires may introduce biases, such as careless responses, that distort research outcomes. Early detection of careless responses in self-report questionnaires may reduce this error, but the literature offers little guidance on techniques for detecting such careless or random responses. Findings: The frequency distribution of true responses tends to be normally distributed, while the presence of careless responses skews the distribution to the right. The RGF of careless responses is higher than the RGF of true responses. Conclusion and Recommendations: RGF may be used as an indicator of careless responding in self-report questionnaires where more accurate data are expected. Social science research that uses self-report questionnaires to measure the affective domain may compute the RGF to determine whether careless responses exist.
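
This excerpt does not reproduce the RGF formula itself, so the Python sketch below only illustrates the abstract's distributional claim: true responses yield a roughly normal frequency distribution of scores, while careless responses pull the distribution to the right. The skewness test, the 0.5 cutoff, the simulated totals, and the function name right_skew_flag are all illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.stats import skew

def right_skew_flag(total_scores, threshold=0.5):
    """Compute the sample skewness of total questionnaire scores and
    flag a right-skewed distribution. The 0.5 cutoff is an
    illustrative assumption, not a value from the paper."""
    g1 = skew(np.asarray(total_scores))
    return g1, g1 > threshold

# Hypothetical data: true responders produce roughly normal totals,
# while a careless subgroup piles extra mass on the high end of the
# scale, skewing the overall distribution to the right.
rng = np.random.default_rng(0)
true_totals = rng.normal(loc=30, scale=5, size=400)
careless_totals = rng.normal(loc=42, scale=4, size=100)
totals = np.concatenate([true_totals, careless_totals])

g1, flagged = right_skew_flag(totals)
print(f"sample skewness = {g1:.2f}; right-skew flagged: {flagged}")
```

A sample-level flag like this would only prompt a closer look (for example, with a respondent-level index such as the RGF the abstract proposes); on its own it does not identify which respondents answered carelessly.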
