Machine Translation Output for Arabic-to-English Translation of Legal Texts: A Comparative Study of AI Tools

Document Type : Original Article

Author

Legal Translator at Julfar Consultancy Services; Misr International University

Abstract

The effectiveness of machine translation (MT) in the legal domain requires close evaluation using appropriate quality-assessment models. This study assesses the translation quality of two advanced MT systems, Gemini and ChatGPT, by analyzing their English translations of an Arabic Memorandum of Association. The TAUS Dynamic Quality Framework (DQF) was adopted as the primary evaluation metric, with a focus on error typology and frequency to measure translation performance. The research follows a quantitative approach, examining the outputs in terms of accuracy and fluency while identifying and categorizing errors. The findings indicate distinct tendencies in each system: ChatGPT often omits source-text content, whereas Gemini exhibits a tendency toward over-translation. The results also underscore the importance of incorporating domain-specific training data and tailored quality-assurance (QA) modules to improve MT output in legal contexts. This study contributes to the growing body of literature on MT evaluation by offering insight into the strengths, limitations, and error patterns of emerging AI-powered translation tools.
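To illustrate the kind of error-typology-and-frequency comparison the abstract describes, the following is a minimal sketch of tallying annotated errors per system by category. The category labels, counts, and system entries are hypothetical placeholders, not the study's data or the official DQF taxonomy.

```python
# Illustrative sketch only: tally hypothetical DQF-style error annotations
# per MT system and report frequencies. Data below is invented for demonstration.
from collections import Counter

# Hypothetical annotated errors; each entry is an error-category label.
annotations = {
    "ChatGPT": ["Omission", "Mistranslation", "Omission", "Terminology"],
    "Gemini":  ["Addition", "Addition", "Terminology", "Grammar"],
}

for system, errors in annotations.items():
    counts = Counter(errors)
    total = sum(counts.values())
    print(f"{system}: total errors = {total}")
    for category, n in counts.most_common():
        print(f"  {category}: {n} ({n / total:.0%})")
```

Such a per-category breakdown is one simple way to surface the contrasting tendencies reported above, such as omission versus over-translation.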

Keywords