Acceptability of Google Translate Machine Translation System in Translation from English into Kurdish
A Study on Evaluating Machine Translation Outputs
Abstract
The development of Machine Translation (MT) systems and their growing use in translation projects have made the evaluation of these systems' outputs crucially important. Recently, the Google Translate MT system added Central Kurdish (Sorani) to its list of supported languages. The current study evaluates the acceptability of the translated texts produced by the system. Texts of different typologies were selected as the study's data, and the Bilingual Evaluation Understudy (BLEU) model was applied to evaluate the MT outputs. The findings show that the performance of the MT system under study in translating English into the Sorani dialect of Kurdish is affected by a number of linguistic and technical hindrances, which in turn reduce the acceptability of the translated text.
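The abstract names BLEU as the evaluation model. As a rough illustration only (not the authors' implementation, which is not given here), BLEU combines modified n-gram precisions with a brevity penalty; a minimal single-reference, sentence-level sketch in Python:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with a single reference.

    Uses clipped (modified) n-gram precision for n = 1..max_n,
    a brevity penalty for short candidates, and a tiny floor on
    each precision so the geometric mean never hits log(0).
    """
    cand = candidate.split()
    ref = reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty: no penalty if the candidate is at least as long as the reference.
    if len(cand) >= len(ref):
        bp = 1.0
    else:
        bp = math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0, while a candidate sharing no n-grams with the reference scores near 0; production evaluations would normally use a tested implementation with proper smoothing and tokenization rather than this sketch.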
Copyright (c) 2024 Fereydoon Rasouli, Soma Soleimanzadeh, Keivan Seyyedi
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.