The Impact of Error Annotation on Post-Editing of Subtitles: An Investigation into Effort and Product

This paper investigates the effect of error annotation on post-editing effort and on the post-edited product. The study also seeks to highlight the value of quality evaluation, particularly error annotation, which, I believe, is a useful method for learning how to work with machine translation (MT). To this end, ten translation students were divided into a control group and a treatment group in an experimental study. The control group post-edited the machine-translated subtitles of an educational video, while the treatment group performed a quality evaluation before post-editing the same content. Students' temporal and technical effort data (Krings 2001) were gathered to test whether there was a significant difference between the two groups. In addition, the end products were examined to see whether quality evaluation affected the post-editing decisions of the treatment group differently from those of the control group. The results show a significant difference in temporal effort between the two groups, with the treatment group completing the post-editing task faster. The control group also expended more technical effort than the treatment group, although this difference was not significant. Finally, the treatment group tended to use MT and edit more efficiently than the control group.
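The between-group comparison of temporal effort described above can be sketched in code. The paper does not state which significance test it used; the Mann-Whitney U test below is merely one common choice for small independent samples, and all timing values are hypothetical, invented purely for illustration.

```python
def mann_whitney_u(a, b):
    """Return the Mann-Whitney U statistic for independent samples a and b.

    U counts, over all cross-group pairs, how often a value from `a`
    is smaller than a value from `b` (ties count as one half).
    """
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical post-editing times in minutes (five students per group),
# constructed so that the treatment group is faster overall.
treatment = [42, 45, 47, 50, 53]
control = [55, 58, 60, 62, 66]

u1 = mann_whitney_u(treatment, control)
u2 = mann_whitney_u(control, treatment)

# The two statistics always sum to n1 * n2 (here 25); the smaller of the
# two is compared against a critical value for the chosen alpha level.
print(u1, u2)  # 25.0 0.0 for these invented samples
```

For n1 = n2 = 5, a minimum U at or below the tabulated critical value (2 at a two-tailed alpha of 0.05) would indicate a significant difference, which is the kind of result the abstract reports for temporal effort.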

___

  • Armstrong, Stephen, Andy Way, Colm Caffrey, and Marian Flanagan. 2006. “Improving the Quality of Automated DVD Subtitles via Example-Based Machine Translation.” In Proceedings of Translating and the Computer, 1–13. London: Aslib. https://aclanthology.org/2006.tc-1.9.
  • Brouns, Francis, Nicolás Serrano Martínez-Santos, Jorge Civera, Marco Kalz, and Alfons Juan. 2015. “Supporting Language Diversity of European MOOCs with the EMMA Platform.” In Proceedings of the European MOOC Stakeholder Summit 2015, edited by M. Lebrun, M. Ebner, I. de Waard, and M. Gaebel, 157–165. https://research.ou.nl/en/publications/supporting-language-diversity-of-european-moocs-with-the-emma-pla.
  • Burchardt, Aljoscha, Arle Lommel, Lindsay Bywood, Kim Harris, and Maja Popović. 2016. “Machine Translation Quality in an Audiovisual Context.” Target 28 (2): 206–221. doi:10.1075/target.28.2.03bur.
  • Bywood, Lindsay, Panayota Georgakopoulou, and Thierry Etchegoyhen. 2017. “Embracing the Threat: Machine Translation as a Solution for Subtitling.” In “Translation of Economics and the Economics of Translation,” edited by Łucja Biel and Vilelmini Sosoni. Special Issue, Perspectives 25 (3): 492–508. doi:10.1080/0907676X.2017.1291695.
  • Castilho, Sheila, Federico Gaspari, Joss Moorkens, and Andy Way. 2017. “Integrating Machine Translation into MOOCs.” In Proceedings of EDULEARN17 Conference, edited by L. Gómez Chova, A. López Martínez, and I. Candel Torres, 9360–9365. Barcelona, Spain: IATED. doi:10.21125/edulearn.2017.0765.
  • Castilho, Sheila, Joss Moorkens, Federico Gaspari, Rico Sennrich, Andy Way, and Panayota Georgakopoulou. 2018. “Evaluating MT for Massive Open Online Courses.” In “Human Evaluation of Statistical and Neural Machine Translation,” edited by Andy Way and Mikel L. Forcada. Special Issue, Machine Translation 32 (3): 255–278. doi:10.1007/s10590-018-9221-y.
  • de Souza, Sheila C. M., Wilker Aziz, and Lucia Specia. 2011. “Assessing the Post-editing Effort for Automatic and Semi-Automatic Translation of DVD Subtitles.” In Proceedings of Recent Advances in Natural Language Processing, edited by Galia Angelova, Kalina Bontcheva, Ruslan Mitkov, and Nikolai Nikolov, 97–103. https://aclanthology.org/R11-1014.
  • Fernández-Torné, Anna, and Anna Matamala. 2016. “Machine Translation in Audio Description? Comparing Creation, Translation and Post-editing Efforts.” SKASE Journal of Translation and Interpretation 9 (1): 64–87. http://www.skase.sk/Volumes/JTI10/pdf_doc/05.pdf.
  • Koponen, Maarit, Umut Sulubacak, Kaisa Vitikainen, and Jörg Tiedemann. 2020. “MT for Subtitling: User Evaluation of Post-editing Productivity.” In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, edited by André Martins, Helena Moniz, Sara Fumega, Bruno Martins, Fernando Batista, Luisa Coheur, Carla Parra, Isabel Trancoso, Marco Turchi, Arianna Bisazza, Joss Moorkens, Ana Guerberof, Mary Nurminen, Lena Marg, and Mikel L. Forcada, 115–124. Lisboa, Portugal: European Association for Machine Translation. https://aclanthology.org/2020.eamt-1.13.
  • Krings, Hans P. 2001. Repairing Texts: Empirical Investigations of Machine Translation Post-editing Processes. Edited by Geoffrey S. Koby. Kent: The Kent State University Press.
  • Martín-Mor, Adrià, and Pilar Sánchez-Gijón. 2016. “Machine Translation and Audiovisual Products: A Case Study.” The Journal of Specialised Translation, no. 26, 172–186. https://jostrans.org/issue26/art_martin.pdf.
  • Moorkens, Joss. 2018a. “Eye Tracking as a Measure of Cognitive Effort for Post-editing of Machine Translation.” In Eye Tracking and Multidisciplinary Studies on Translation, edited by Callum Walker and Federico M. Federici, 55–70. Amsterdam: John Benjamins. doi:10.1075/btl.143.04moo.
  • Moorkens, Joss. 2018b. “What to Expect from Neural Machine Translation: A Practical In-class Translation Evaluation Exercise.” The Interpreter and Translator Trainer 12 (4): 375–387. doi:10.1080/1750399X.2018.1501639.
  • Nunes Vieira, Lucas. 2015. “Cognitive Effort in Post-Editing of Machine Translation: Evidence from Eye Movements, Subjective Ratings, and Think-Aloud Protocols.” PhD diss., Newcastle University.
  • Nyberg, Eric, and Teruko Mitamura. 1997. “A Real Time MT System for Translating Broadcast Captions.” In Proceedings of the Sixth Machine Translation Summit, 51–57. https://aclanthology.org/1997.mtsummit-papers.2.
  • Öner, Işın, and Senem Öner Bulut. 2021. “Post-Editing Oriented Human Quality Evaluation of Neural Machine Translation in Translator Training: A Study on Perceived Difficulties and Benefits.” transLogos 4 (1): 100–124. doi:10.29228/transLogos.33.
  • Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. “BLEU: A Method for Automatic Evaluation of Machine Translation.” In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318. Philadelphia. doi:10.3115/1073083.1073135.
  • Popowich, Fred, Paul McFetridge, Davide Turcato, and Janine Toole. 2000. “Machine Translation of Closed Captions.” Machine Translation 15 (4): 311–341. https://www.jstor.org/stable/20060451.
  • Ruiz Costa-jussà, Marta, Lluis Formiga, Oriol Torrillas, Jordi Petit, and José Adrián Rodríguez Fonollosa. 2015. “A MOOC on Approaches to Machine Translation.” The International Review of Research in Open and Distributed Learning 16 (6): 174–205. doi:10.19173/irrodl.v16i6.2145.
  • Saldanha, Gabriela, and Sharon O’Brien. 2014. Research Methodologies in Translation Studies. New York: Routledge.
  • Silvestre-Cerdà, J. A., M. A. del Agua, G. Garcés, G. Gascó, A. Giménez, A. Martínez, A. Pérez, I. Sánchez, N. Serrano, R. Spencer, J. D. Valor, J. Andrés-Ferrer, J. Civera, A. Sanchis, and A. Juan. 2012. “transLectures.” In Online Proceedings of Advances in Speech and Language Technologies for Iberian Languages, IBERSPEECH ’12, Madrid, Spain. https://riunet.upv.es/handle/10251/37290.
  • Volk, Martin, Rico Sennrich, Christian Hardmeier, and Frida Tidström. 2010. “Machine Translation of TV Subtitles for Large Scale Production.” In Proceedings of the Second Joint EM+/CNGL Workshop “Bringing MT to the User: Research on Integrating MT in the Translation Industry,” edited by Ventsislav Zhechev, 53–62. Denver, Colorado, USA: Association for Machine Translation in the Americas. doi:10.5167/uzh-36755.