
Neural machine translation of low-resource languages using SMT phrase pair injection

Published online by Cambridge University Press:  17 June 2020

Sukanta Sen*
Affiliation:
Indian Institute of Technology Patna, India
Mohammed Hasanuzzaman
Affiliation:
ADAPT Centre, Dublin City University, Ireland
Asif Ekbal
Affiliation:
Indian Institute of Technology Patna, India
Pushpak Bhattacharyya
Affiliation:
Indian Institute of Technology Patna, India
Andy Way
Affiliation:
ADAPT Centre, Dublin City University, Ireland
*Corresponding author. E-mail: [email protected]

Abstract

Neural machine translation (NMT) has recently shown promising results on publicly available benchmark datasets and is being rapidly adopted in various production systems. However, it requires a high-quality, large-scale parallel corpus, and such a corpus is not always available because building one demands time, money, and professional expertise. Hence, most existing large-scale parallel corpora are limited to specific languages and domains. In this paper, we propose an effective approach to improving an NMT system in a low-resource scenario without using any additional data. Our approach augments the original training data with parallel phrases extracted from that same training data using a statistical machine translation (SMT) system. Our proposed approach is based on the gated recurrent unit (GRU) and transformer networks. We choose Hindi–English and Hindi–Bengali datasets for the Health, Tourism, and Judicial (Hindi–English only) domains, and we train NMT models for 10 translation directions, each using only 5–23k parallel sentences. Experiments show improvements in the range of 1.38–15.36 BiLingual Evaluation Understudy (BLEU) points over the baseline systems, and that transformer models perform better than GRU models in low-resource scenarios. In addition, we find that our proposed method outperforms SMT, which is known to work better than neural models in low-resource scenarios, for some translation directions. To further demonstrate the effectiveness of our proposed model, we also apply our approach to another interesting NMT task, namely old-to-modern English translation, using a tiny parallel corpus of only 2.7K sentences. For this task, we use a publicly available old–modern English text which is approximately 1000 years old. Evaluation for this task shows significant improvement over the baseline NMT.
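The core idea of the abstract can be sketched in a few lines of code: extract phrase pairs from the original parallel data with an SMT system, keep the reliable ones, and append them to the training corpus as additional short sentence pairs. The sketch below is illustrative only; the Moses-style phrase table format, the score field used, and the probability threshold are assumptions on our part, not details taken from the paper.

```python
# Hypothetical sketch of SMT phrase pair injection: parse a
# Moses-style phrase table ("src ||| tgt ||| scores ...") and
# append high-probability phrase pairs to the parallel training
# data as extra (short) sentence pairs.

def load_phrase_pairs(phrase_table_lines, min_prob=0.5):
    """Yield (source, target) phrase pairs whose direct phrase
    translation probability p(tgt|src) is at least min_prob."""
    for line in phrase_table_lines:
        fields = [f.strip() for f in line.split("|||")]
        if len(fields) < 3:
            continue
        src, tgt, scores = fields[0], fields[1], fields[2].split()
        # Moses score order: inverse phrase prob, inverse lexical
        # weight, direct phrase prob, direct lexical weight.
        if len(scores) >= 3 and float(scores[2]) >= min_prob:
            yield src, tgt

def augment_corpus(src_sents, tgt_sents, phrase_table_lines, min_prob=0.5):
    """Return the original sentence pairs plus injected phrase pairs."""
    pairs = list(zip(src_sents, tgt_sents))
    pairs.extend(load_phrase_pairs(phrase_table_lines, min_prob))
    return pairs
```

The augmented corpus is then used to train the NMT model exactly as the original corpus would be; no external data is involved, since the phrases come from the training set itself.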

Type
Article
Copyright
© The Author(s), 2020. Published by Cambridge University Press


References

Artetxe, M., Labaka, G. and Agirre, E. (2018). Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics, pp. 3632–3642.
Arthur, P., Neubig, G. and Nakamura, S. (2016). Incorporating discrete translation lexicons into neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1557–1567.
Bahdanau, D., Cho, K. and Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR).
Bojar, O., Chatterjee, R., Federmann, C., Graham, Y., Haddow, B., Huck, M., Yepes, A.J., Koehn, P., Logacheva, V., Monz, C., Negri, M., Névéol, A., Neves, M., Popel, M., Post, M., Rubino, R., Scarton, C., Specia, L., Turchi, M., Verspoor, K. and Zampieri, M. (2016). Findings of the 2016 conference on machine translation. In ACL 2016 First Conference on Machine Translation (WMT16). The Association for Computational Linguistics, pp. 131–198.
Bojar, O., Diatka, V., Rychlý, P., Stranák, P., Suchomel, V., Tamchyna, A. and Zeman, D. (2014). HindEnCorp: Hindi-English and Hindi-only corpus for machine translation. In LREC, pp. 3550–3555.
Cho, K., Van Merriënboer, B., Bahdanau, D. and Bengio, Y. (2014). On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pp. 103–111.
Crego, J., Kim, J., Klein, G., Rebollo, A., Yang, K., Senellart, J., Akhanov, E., Brunelle, P., Coquard, A., Deng, Y., Enoue, S., Geiss, C., Johanson, J., Khalsa, A., Khiari, R., Ko, B., Kobus, C., Lorieux, J., Martins, L., Nguyen, D.-C., Priori, A., Riccardi, T., Segal, N., Servan, C., Tiquet, C., Wang, B., Yang, J., Zhang, D., Zhou, J. and Zoldan, P. (2016). SYSTRAN's pure neural machine translation systems. arXiv preprint arXiv:1610.05540.
Fadaee, M., Bisazza, A. and Monz, C. (2017). Data augmentation for low-resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Vancouver, Canada. Association for Computational Linguistics, pp. 567–573.
Feng, Y., Zhang, S., Zhang, A., Wang, D. and Abel, A. (2017). Memory-augmented neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pp. 1390–1399.
Forcada, M.L. and Ñeco, R.P. (1997). Recursive hetero-associative memories for translation. In International Work-Conference on Artificial Neural Networks. Springer, pp. 453–462.
Gulcehre, C., Firat, O., Xu, K., Cho, K. and Bengio, Y. (2017). On integrating a language model into neural machine translation. Computer Speech & Language 45, 137–148.
Guzmán, F., Chen, P.-J., Ott, M., Pino, J., Lample, G., Koehn, P., Chaudhary, V. and Ranzato, M. (2019). Two new evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. arXiv preprint arXiv:1902.01382.
He, W., He, Z., Wu, H. and Wang, H. (2016). Improved neural machine translation with SMT features. In Thirtieth AAAI Conference on Artificial Intelligence.
Heafield, K. (2011). KenLM: faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation. Association for Computational Linguistics, pp. 187–197.
Hieber, F., Domhan, T., Denkowski, M., Vilar, D., Sokolov, A., Clifton, A. and Post, M. (2017). Sockeye: a toolkit for neural machine translation. arXiv preprint arXiv:1712.05690.
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation 9(8), 1735–1780.
Jha, G.N. (2010). The TDIL program and the Indian Language Corpora Initiative (ILCI). In LREC.
Junczys-Dowmunt, M., Dwojak, T. and Hoang, H. (2016). Is neural machine translation ready for deployment? A case study on 30 translation directions. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT).
Kalchbrenner, N. and Blunsom, P. (2013). Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1700–1709.
Kingma, D.P. and Ba, J. (2015). Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR).
Kneser, R. and Ney, H. (1995). Improved backing-off for m-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-95), vol. 1. IEEE, pp. 181–184.
Koehn, P. (2004). Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.
Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A. and Herbst, E. (2007). Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Association for Computational Linguistics, pp. 177–180.
Koehn, P. and Knowles, R. (2017). Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, Vancouver. Association for Computational Linguistics, pp. 28–39.
Koehn, P., Och, F.J. and Marcu, D. (2003). Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1. Association for Computational Linguistics, pp. 48–54.
Kunchukuttan, A., Mehta, P. and Bhattacharyya, P. (2018). The IIT Bombay English-Hindi parallel corpus. In Calzolari N., Choukri K., Cieri C., Declerck T., Goggi S., Hasida K., Isahara H., Maegaard B., Mariani J., Mazo H., Moreno A., Odijk J., Piperidis S. and Tokunaga T. (eds), Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA).
Lample, G. and Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Lample, G., Ott, M., Conneau, A., Denoyer, L. and Ranzato, M. (2018). Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics, pp. 5039–5049.
Niehues, J., Cho, E., Ha, T.-L. and Waibel, A. (2016). Pre-translation for neural machine translation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan. The COLING 2016 Organizing Committee, pp. 1828–1836.
Papineni, K., Roukos, S., Ward, T. and Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, Philadelphia, Pennsylvania, pp. 311–318.
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L. and Lerer, A. (2017). Automatic differentiation in PyTorch. In NIPS 2017 Autodiff Workshop: The Future of Gradient-based Machine Learning Software and Techniques.
Ramachandran, P., Liu, P. and Le, Q. (2017). Unsupervised pretraining for sequence to sequence learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark. Association for Computational Linguistics, pp. 383–391.
Ren, S., Zhang, Z., Liu, S., Zhou, M. and Ma, S. (2019). Unsupervised neural machine translation with SMT as posterior regularization. arXiv preprint arXiv:1901.04112.
Sen, S., Hasanuzzaman, M., Ekbal, A., Bhattacharyya, P. and Way, A. (in press). Take help from elder brother: old to modern English NMT with phrase pair feedback. In Proceedings of the International Conference on Computational Linguistics and Intelligent Text Processing.
Sennrich, R., Firat, O., Cho, K., Birch, A., Haddow, B., Hitschler, J., Junczys-Dowmunt, M., Läubli, S., Miceli Barone, A.V., Mokry, J. and Nadejde, M. (2017). Nematus: a toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pp. 65–68.
Sennrich, R., Haddow, B. and Birch, A. (2016a). Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7–12, 2016, Berlin, Germany.
Sennrich, R., Haddow, B. and Birch, A. (2016b). Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany. Association for Computational Linguistics, pp. 1715–1725.
Song, K., Zhang, Y., Yu, H., Luo, W., Wang, K. and Zhang, M. (2019). Code-switching for enhancing NMT with pre-specified translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. Association for Computational Linguistics, pp. 449–459.
Sutskever, I., Vinyals, O. and Le, Q.V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112.
Tang, Y., Meng, F., Lu, Z., Li, H. and Yu, P.L. (2016). Neural machine translation with external phrase memory. arXiv preprint arXiv:1606.01792.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
Wang, X., Lu, Z., Tu, Z., Li, H., Xiong, D. and Zhang, M. (2017). Neural machine translation advised by statistical machine translation. In Thirty-First AAAI Conference on Artificial Intelligence.
Wang, X., Pham, H., Dai, Z. and Neubig, G. (2018). SwitchOut: an efficient data augmentation algorithm for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics, pp. 856–861.
Wang, X., Tu, Z. and Zhang, M. (2018). Incorporating statistical machine translation word knowledge into neural machine translation. IEEE/ACM Transactions on Audio, Speech, and Language Processing 26(12), 2255–2266.
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., Kaiser, U., Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., Kurian, G., Patil, N., Wang, W., Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M. and Dean, J. (2016). Google's neural machine translation system: bridging the gap between human and machine translation. CoRR.
Zhang, J. and Zong, C. (2016). Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1535–1545.
Zhang, Z., Liu, S., Li, M., Zhou, M. and Chen, E. (2018). Joint training for neural machine translation models with monolingual data. In Thirty-Second AAAI Conference on Artificial Intelligence.
Zhao, Y., Wang, Y., Zhang, J. and Zong, C. (2018). Phrase table as recommendation memory for neural machine translation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13–19, 2018, Stockholm, Sweden, pp. 4609–4615.
Zoph, B., Yuret, D., May, J. and Knight, K. (2016). Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pp. 1568–1575.