Acceptability of Google Translate Machine Translation System in Translation from English into Kurdish

A Study on Evaluating Machine Translation Outputs

Keywords: Acceptability of MT Output, Automatic Evaluation Systems, Kurdish Language (Sorani)

Abstract

The development of Machine Translation (MT) systems and their use in translation projects have made the evaluation of these systems' outputs crucially important. Recently, the Google Translate MT system added the central dialect of Kurdish (Sorani) to its language list. The current study attempts to evaluate the acceptability of the translated texts produced by the system. Different text typologies were considered when selecting the study's data. To evaluate the MT outputs, the Bilingual Evaluation Understudy (BLEU) evaluation model was administered. The findings show that the performance of the MT system under study in translating English into the Sorani dialect of Kurdish is hampered by several linguistic and technical hindrances, which reduce the overall acceptability of the translated text.
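
The BLEU metric used in the study rates an MT output by measuring its n-gram overlap with one or more human reference translations, yielding a score between 0 and 1 (often reported on a 0-100 scale). For illustration only, the following is a minimal sketch of a sentence-level BLEU computation using NLTK's implementation; the token sequences are hypothetical placeholders, not data from the study's corpus.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Tokenized human reference translation and MT candidate (placeholder data,
# not drawn from the study's English-Sorani corpus).
reference = ["the", "weather", "is", "cold", "today"]
candidate = ["the", "weather", "today", "is", "cold"]

# Smoothing avoids a zero score when a higher-order n-gram has no match,
# which is common for short sentences.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], candidate, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")

In practice, corpus-level BLEU computed over many segments is more reliable than a single sentence-level score, which is why evaluation studies typically aggregate across a test set.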


Author Biographies

Fereydoon Rasouli, Department of Translation, Cihan University-Erbil, Kurdistan Region, Iraq
Fereydoon Rasouli has been teaching as an assistant lecturer at the Department of Translation, Cihan University-Erbil, since 2014. He holds a master's degree in Translation Studies from Kharazmi University in Tehran, Iran. Mr. Rasouli's main areas of research are machine translation, translation and culture, interpretation, and ELT.
Soma Soleimanzadeh, Department of Computer Science, Cihan University-Erbil, Kurdistan Region, Iraq

Soma Soleimanzadeh received a master's degree in IT Engineering from the Iran University of Science and Technology and a bachelor's degree in the same field from the University of Tabriz. She is an assistant lecturer at the Department of Computer Science, Cihan University-Erbil.

Keivan Seyyedi, Department of Translation, Cihan University-Erbil, Kurdistan Region, Iraq

Keivan Seyyedi is an assistant professor at the Department of Translation, Cihan University-Erbil, Kurdistan Region, Iraq. His research interest is in translation studies.


Published
2024-01-01
How to Cite
Rasouli, F., Soleimanzadeh, S., & Seyyedi, K. (2024). Acceptability of Google Translate Machine Translation System in Translation from English into Kurdish. Cihan University-Erbil Journal of Humanities and Social Sciences, 8(1), 7-14. https://doi.org/10.24086/cuejhss.v8n1y2024.pp7-14
Section
Articles