Evaluating Intelligibility in Human Translation and Machine Translation

Norwati Md Yusof, Saadiyah Darus, Mohd Juzaiddin Ab Aziz


Research in automated translation mostly aims to develop translation systems that further enhance the transfer of knowledge and information. This need for transfer has driven major advances in machine translation (MT) software development and encourages further research in various MT-related areas. However, there have been no focused investigations of evaluation criteria, particularly evaluation that considers human evaluators and the reconciliation of human translation (HT) and MT. Thus, focusing on two evaluation attributes, namely Accuracy and Intelligibility, a study was conducted to investigate translation evaluation criteria for content and language transfer through the reconciliation of HT and MT evaluation based on human evaluators' perceptions. The study focused on the range of criteria human evaluators expect for HT and MT under the two attributes, and the evaluation was tested on a machine system to observe the system's performance in terms of Accuracy and Intelligibility. This paper reports the range of criteria for evaluating translation in terms of Intelligibility, as expected by human evaluators of HT and MT, with respect to content and language transfer. The study uses a mixed-methods approach with soft and hard data collection. The results demonstrate that the expected range of each criterion identified for content evaluation is higher for HT than for MT. The implications of the study are described to provide an understanding of evaluation for human and automated translation in terms of Intelligibility.
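The kind of human evaluation described above, in which evaluators rate translations against a set of Intelligibility criteria and HT is expected to score higher than MT, can be illustrated with a minimal sketch. The criterion names, rating scale, and sample scores below are hypothetical and not taken from the study; they only show how per-criterion ratings from several evaluators might be aggregated and compared.

```python
# Hypothetical sketch: aggregating human evaluators' Intelligibility ratings
# for human translation (HT) vs machine translation (MT) output.
# Criterion names and scores are illustrative, not drawn from the study.

def mean_ratings(scores_by_criterion):
    """Average the evaluators' scores for each criterion (e.g. a 1-5 scale)."""
    return {c: sum(s) / len(s) for c, s in scores_by_criterion.items()}

# Each list holds one score per evaluator for that criterion.
ht_scores = {"clarity": [5, 4, 5], "fluency": [4, 5, 4], "comprehensibility": [5, 5, 4]}
mt_scores = {"clarity": [3, 4, 3], "fluency": [3, 3, 2], "comprehensibility": [4, 3, 3]}

ht_means = mean_ratings(ht_scores)
mt_means = mean_ratings(mt_scores)

# Compare HT and MT per criterion.
for criterion in ht_means:
    print(f"{criterion}: HT={ht_means[criterion]:.2f} MT={mt_means[criterion]:.2f}")
```

Under assumptions like these, the study's finding corresponds to the HT mean exceeding the MT mean on each criterion.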


Keywords: criteria; evaluation; intelligibility; human translation; machine translation 





DOI: http://dx.doi.org/10.17576/3L-2017-2304-19






eISSN: 2550-2247

ISSN: 0128-5157