A COMPREHENSIVE REVIEW OF SYSTEMATIC ASSESSMENT TECHNIQUES IN INTERPRETING

With the growing acknowledgment of the prominence of assessment in interpreter education, there is an increasing need for far-reaching analyses of assessment in interpreting from different perspectives. This article therefore provides an in-depth review of practical assessment techniques grounded in robust methodologies, in order to shed light on how to assess interpreting performance effectively from an educational standpoint. To this end, the study first presents the theoretical underpinnings of assessment in interpreting by addressing the major concepts in the field. In this respect, it begins with a detailed description of assessment in interpreting and elaborates on the central points of the process, namely validity and reliability, and the different types of assessment classified by purpose. The review then scrutinizes analytic rating scales, comparing them with holistic assessment techniques. Finally, it offers a thorough examination of some innovative assessment practices proposed in the relevant literature, namely peer assessment and self-assessment in interpreting, by addressing the different parameters associated with these techniques. The conclusions drawn from this study may help test developers and interpreter trainers gain further knowledge about designing effective and sound tests for measuring interpreting performance.
