
A Survey on Machine Reading Comprehension Systems

Published online by Cambridge University Press:  19 January 2022

Razieh Baradaran
Affiliation:
Computer and Information Technology Department, University of Qom, Qom, Iran
Razieh Ghiasi
Affiliation:
Computer and Information Technology Department, University of Qom, Qom, Iran
Hossein Amirkhani*
Affiliation:
Computer and Information Technology Department, University of Qom, Qom, Iran
*Corresponding author. E-mail: [email protected]

Abstract

Machine Reading Comprehension (MRC) is a challenging task and a hot topic in Natural Language Processing. The goal of this field is to develop systems that answer questions about a given context. In this paper, we present a comprehensive survey of diverse aspects of MRC systems, including their approaches, structures, inputs/outputs, and research novelties. We illustrate recent trends in this field based on a review of 241 papers published during 2016–2020. Our investigation shows that the focus of research has shifted in recent years from answer extraction to answer generation, from single- to multi-document reading comprehension, and from learning from scratch to using pre-trained word vectors. Moreover, we discuss the popular datasets and evaluation metrics in this field. The paper ends with an investigation of the most-cited papers and their contributions.
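To make the task concrete: an MRC system receives a context passage and a question, and must produce an answer grounded in that passage. As a minimal, self-contained sketch of the kind of extractive baseline that predates the neural systems surveyed here (an illustration, not code from the paper), a system can simply return the context sentence sharing the most words with the question:

```python
import re

def answer_sentence(context: str, question: str) -> str:
    """Toy extractive reader: return the context sentence that shares the
    most word types with the question (bag-of-words matching, no learning)."""
    def tokens(text: str) -> set:
        return set(re.findall(r"[a-z0-9]+", text.lower()))
    question_words = tokens(question)
    # Split the passage into sentences on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", context.strip())
    # Score each sentence by its word overlap with the question.
    return max(sentences, key=lambda s: len(tokens(s) & question_words))

context = ("Paris is the capital of France. "
           "Berlin is the capital of Germany.")
print(answer_sentence(context, "What is the capital of France?"))
# → Paris is the capital of France.
```

Such heuristics return a whole sentence rather than an exact span; the shift the survey documents, from span extraction to free-form answer generation, is a move in the opposite direction, toward answers that need not appear verbatim in the context at all.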

Type
Survey Paper
Copyright
© The Author(s), 2022. Published by Cambridge University Press


Footnotes

** Equal contribution.

References

Aghaebrahimian, A. (2018). Linguistically-Based Deep Unstructured Question Answering. Proceedings of the 22nd Conference on Computational Natural Language Learning, pp. 433-43.CrossRefGoogle Scholar
Akour, M., Abufardeh, S., Magel, K. and Al-Radaideh, Q. (2011). QArabPro: A Rule Based Question Answering System for Reading Comprehension Tests in Arabic. American Journal of Applied Sciences 8, 652661.CrossRefGoogle Scholar
Amirkhani, H., Jafari, M.A., Amirak, A., Pourjafari, Z., Jahromi, S.F. and Kouhkan, Z. (2020). FarsTail: A Persian Natural Language Inference Dataset. arXiv preprint arXiv:2009.08820.Google Scholar
Andor, D., He, L., Lee, K. and Pitler, E. (2019). Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5947-52. Hong Kong, China.CrossRefGoogle Scholar
Angelidis, S., Frermann, L., Marcheggiani, D., Blanco, R. and MÀrquez, L. (2019). Book QA: Stories of Challenges and Opportunities. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 78-85. Hong Kong, China.CrossRefGoogle Scholar
Anuranjana, K., Rao, V. and Mamidi, R. (2019). HindiRC: A Dataset for Reading Comprehension in Hindi. 0th International Conference on Computational Linguistics and Intelligent Text, La Rochelle, France.Google Scholar
Arivuchelvan, K.M. and Lakahmi, K. (2017). Reading Comprehension System-A Review. Indian Journal of Scientific Research (IJSR) 14, 8390.Google Scholar
Asai, A. and Hajishirzi, H. (2020). Logic-Guided Data Augmentation and Regularization for Consistent Question Answering. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5642–50.CrossRefGoogle Scholar
Back, S., Chinthakindi, S.C., Kedia, A., Lee, H. and Choo, J. (2020). NeurQuRI: Neural Question Requirement Inspector for Answerability Prediction in Machine Reading Comprehension. International Conference on Learning Representations.Google Scholar
Back, S., Yu, S., Indurthi, S.R., Kim, J. and Choo, J. (2018). MemoReader: Large-Scale Reading Comprehension through Neural Memory Controller. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2131-40.CrossRefGoogle Scholar
Bahdanau, D., Cho, K. and Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. Proceedings of the International Conference on Learning Representations (ICLR).Google Scholar
Bajgar, O., Kadlec, R. and Kleindienst, J. (2017). Embracing data abundance: BookTest Dataset for Reading Comprehension. International Conference on Learning Representations (ICLR) Workshop.Google Scholar
Banerjee, S. and Lavie, A. (2005). METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pp. 65-72.Google Scholar
Bao, H., Dong, L., Wei, F., Wang, W., Yang, N., Cui, L., Piao, S. and Zhou, M. (2019). Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 14-18.CrossRefGoogle Scholar
Bauer, L., Wang, Y. and Bansal, M. (2018). Commonsense for Generative Multi-Hop Question Answering Tasks. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4220–30CrossRefGoogle Scholar
Berant, J., Srikumar, V., Chen, P.-C., Vander Linden, A., Harding, B., Huang, B., Clark, P. and Manning, C.D. (2014). Modeling Biological Processes for Reading Comprehension. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1499-510.CrossRefGoogle Scholar
Berzak, Y., Malmaud, J. and Levy, R. (2020). STARC: Structured Annotations for Reading Comprehension. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5726–35.CrossRefGoogle Scholar
Bi, B., Wu, C., Yan, M., Wang, W., Xia, J. and Li, C. (2019). Incorporating External Knowledge into Machine Reading for Generative Question Answering. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2521-30. Hong Kong, China.CrossRefGoogle Scholar
Bouziane, A., Bouchiha, D., Doumi, N. and Malki, M. (2015). Question answering systems: survey and trends. Procedia Computer Science 73, 366375.CrossRefGoogle Scholar
Cao, Y., Fang, M., Yu, B. and Zhou, J.T. (2020). Unsupervised Domain Adaptation on Reading Comprehension. AAAI, pp. 7480-87.CrossRefGoogle Scholar
Charlet, D., Damnati, G., BÉchet, F. and Heinecke, J. (2020). Cross-lingual and cross-domain evaluation of Machine Reading Comprehension with Squad and CALOR-Quest corpora. Proceedings of The 12th Language Resources and Evaluation Conference, pp. 5491-97.Google Scholar
Chaturvedi, A., Pandit, O. and Garain, U. (2018). CNN for Text-Based Multiple Choice Question Answering. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 272-77.CrossRefGoogle Scholar
Chen, A., Stanovsky, G., Singh, S. and Gardner, M. (2019a). Evaluating question answering evaluation. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 119-24.CrossRefGoogle Scholar
Chen, D. (2018). Neural Reading Comprehension and Beyond. Stanford University.Google Scholar
Chen, D., Bolton, J. and Manning, C.D. (2016). A Thorough Examination of the Cnn/Daily Mail Reading Comprehension Task. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 2358–67.CrossRefGoogle Scholar
Chen, D., Fisch, A., Weston, J. and Bordes, A. (2017). Reading wikipedia to answer open-domain questions. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 1870-79.CrossRefGoogle Scholar
Chen, X., Liang, C., Yu, A.W., Zhou, D., Song, D. and Le, Q.V. (2020). Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension. International Conference on Learning Representations (ICLR2020).Google Scholar
Chen, Z., Cui, Y., Ma, W., Wang, S. and Hu, G. (2019b). Convolutional spatial attention model for reading comprehension with multiple-choice questions. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 6276-83.CrossRefGoogle Scholar
Cho, K., Van MerriËnboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H. and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–34. Doha, Qatar.CrossRefGoogle Scholar
Choi, E., Hewlett, D., Lacoste, A., Polosukhin, I., Uszkoreit, J. and Berant, J. (2017a). Hierarchical Question Answering for Long Documents. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 209-20. Vancouver, Canada.CrossRefGoogle Scholar
Choi, E., Hewlett, D., Uszkoreit, J., Polosukhin, I., Lacoste, A. and Berant, J. (2017b). Coarse-to-fine question answering for long documents. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 209-20.CrossRefGoogle Scholar
Chollet, F. (2017). Deep learning with python. Manning Publications Co.Google Scholar
Clark, C. and Gardner, M. (2018). Simple and effective multi-paragraph reading comprehension. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers, pp. 845–55CrossRefGoogle Scholar
Collobert, R. and Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. Proceedings of the 25th international conference on Machine learning, pp. 160-67. ACM.CrossRefGoogle Scholar
Cui, L., Huang, S., Wei, F., Tan, C., Duan, C. and Zhou, M. (2017a). Superagent: a customer service chatbot for e-commerce websites. Proceedings of ACL, System Demonstrations, pp. 97-102. Association for Computational Linguistics.CrossRefGoogle Scholar
Cui, Y., Che, W., Liu, T., Qin, B., Wang, S. and Hu, G. (2019a). Cross-Lingual Machine Reading Comprehension. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1586-95. Hong Kong, China.CrossRefGoogle Scholar
Cui, Y., Chen, Z., Wei, S., Wang, S., Liu, T. and Hu, G. (2017b). Attention-over-attention neural networks for reading comprehension. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics pp. 593–602.CrossRefGoogle Scholar
Cui, Y., Liu, T., Che, W., Xiao, L., Chen, Z., Ma, W., Wang, S. and Hu, G. (2019b). A Span-Extraction Dataset for Chinese Machine Reading Comprehension. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5883-89. Hong Kong, China.CrossRefGoogle Scholar
Cui, Y., Liu, T., Chen, Z., Wang, S. and Hu, G. (2016). Consensus Attention-Based Neural Networks for Chinese Reading Comprehension. Proceedings of the COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 1777–86.Google Scholar
Das, R., Dhuliawala, S., Zaheer, M. and McCallum, A. (2019). Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering. International Conference on Learning Representations.Google Scholar
Dasigi, P., Liu, N.F., MarasoviĆ, A., Smith, N.A. and Gardner, M. (2019). Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5925-32. Hong Kong, China.CrossRefGoogle Scholar
Dehghani, M., Azarbonyad, H., Kamps, J. and de Rijke, M. (2019). Learning to transform, combine, and reason in open-domain question answering. Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 681-89.CrossRefGoogle Scholar
Deng, L and Liu, Y. (2018). Deep Learning in Natural Language Processing. Springer Singapore.CrossRefGoogle Scholar
Devlin, J., Chang, M.-W., Lee, K. and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171-86.Google Scholar
Dhingra, B., Liu, H., Yang, Z., Cohen, W.W. and Salakhutdinov, R. (2017). Gated-attention readers for text comprehension. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics pp. 1832–46.CrossRefGoogle Scholar
Dhingra, B., Pruthi, D. and Rajagopal, D. (2018). Simple and Effective Semi-Supervised Question Answering. Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), pp. 582–87.CrossRefGoogle Scholar
Dhingra, B., Zhou, Z., Fitzpatrick, D., Muehl, M. and Cohen, W.W. (2016). Tweet2vec: Character-based distributed representations for social media. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 269–74.CrossRefGoogle Scholar
Ding, M., Zhou, C., Chen, Q., Yang, H. and Tang, J. (2019). Cognitive Graph for Multi-Hop Reading Comprehension at Scale. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2694-703. Florence, Italy.CrossRefGoogle Scholar
Du, X. and Cardie, C. (2018). Harvesting Paragraph-Level Question-Answer Pairs from Wikipedia. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pp. 1907–17.Google Scholar
Dua, D., Singh, S. and Gardner, M. (2020). Benefits of Intermediate Annotations in Reading Comprehension. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5627-34.CrossRefGoogle Scholar
Dua, D., Wang, Y., Dasigi, P., Stanovsky, G., Singh, S. and Gardner, M. (2019). DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2368-78. Minneapolis, Minnesota.Google Scholar
Duan, N., Tang, D., Chen, P. and Zhou, M. (2017). Question generation for question answering. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 866-74.CrossRefGoogle Scholar
Dunietz, J., Burnham, G., Bharadwaj, A., Chu-Carroll, J., Rambow, O. and Ferrucci, D. (2020). To Test Machine Comprehension, Start by Defining Comprehension. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7839–59.CrossRefGoogle Scholar
Elgohary, A., Zhao, C. and Boyd-Graber, J. (2018). A dataset and baselines for sequential open-domain question answering. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1077-83.CrossRefGoogle Scholar
Ethayarajh, K. (2019). How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 55–65. Hong Kong, China.CrossRefGoogle Scholar
Fisch, A., Talmor, A., Jia, R., Seo, M., Choi, E. and Chen, D. (2019). MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 1-13. Hong Kong, China.CrossRefGoogle Scholar
Frermann, L. (2019). Extractive NarrativeQA with Heuristic Pre-Training. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 172-82.CrossRefGoogle Scholar
Gardner, M., Berant, J., Hajishirzi, H., Talmor, A. and Min, S. (2019). On Making Reading Comprehension More Comprehensive. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 105-12.CrossRefGoogle Scholar
Ghaeini, R., Fern, X.Z., Shahbazi, H. and Tadepalli, P. (2018). Dependent Gated Reading for Cloze-Style Question Answering. Proceedings of the 27th International Conference on Computational Linguistics pp. 3330–45.Google Scholar
Giuseppe, A. (2017). Question Dependent Recurrent Entity Network for Question Answering. NL4AI: 1st Workshop on Natural Language for Artificial Intelligence, pp. 69-80. CEUR.Google Scholar
Golub, D., Huang, P.-S., He, X. and Deng, L. (2017). Two-stage synthesis networks for transfer learning in machine comprehension. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing pp. 835–44.CrossRefGoogle Scholar
Gong, Y. and Bowman, S.R. (2018). Ruminating reader: Reasoning with gated multi-hop attention. Proceedings of the Workshop on Machine Reading for Question Answering, pp. 1–11 Association for Computational Linguistics.CrossRefGoogle Scholar
Greco, C., Suglia, A., Basile, P., Rossiello, G. and Semeraro, G. (2016). Iterative multi-document neural attention for multiple answer prediction. 2016 AI* IA Workshop on Deep Understanding and Reasoning: A Challenge for Next-Generation Intelligent Agents (URANIA).Google Scholar
Guo, S., Li, R., Tan, H., Li, X., Guan, Y., Zhao, H. and Zhang, Y. (2020). A Frame-based Sentence Representation for Machine Reading Comprehension. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 891-96.CrossRefGoogle Scholar
Gupta, D., Lenka, P., Ekbal, A. and Bhattacharyya, P. (2018). Uncovering Code-Mixed Challenges: A Framework for Linguistically Driven Question Generation and Neural based Question Answering. Proceedings of the 22nd Conference on Computational Natural Language Learning, pp. 119-30.CrossRefGoogle Scholar
Gupta, M., Kulkarni, N., Chanda, R., Rayasam, A. and Lipton, Z.C. (2019). AmazonQA: A Review-Based Question Answering Task. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), pp. 4996-5002.CrossRefGoogle Scholar
Gupta, S. and Khade, N. (2020). BERT Based Multilingual Machine Comprehension in English and Hindi. ACM Trans. Asian Low-Resour. Lang. Inf. Process.s 19.Google Scholar
Gupta, S., Rawat, B.P.S. and Yu, H. (2020). Conversational Machine Comprehension: a Literature Review. Proceedings of the 28th International Conference on Computational Linguistics, pp. 2739-53.CrossRefGoogle Scholar
Hardalov, M., Koychev, I. and Nakov, P. (2019). Beyond English-Only Reading Comprehension: Experiments in Zero-Shot Multilingual Transfer for Bulgarian. Proceedings of Recent Advances in Natural Language Processing, pp. 447-59. Varna, Bulgaria.CrossRefGoogle Scholar
Hashemi, H., Aliannejadi, M., Zamani, H. and Croft, W.B. (2020). ANTIQUE: A non-factoid question answering benchmark. European Conference on Information Retrieval, pp. 166-73. Springer.CrossRefGoogle Scholar
He, W., Liu, K., Liu, J., Lyu, Y., Zhao, S., Xiao, X., Liu, Y., Wang, Y., Wu, H. and She, Q. (2018). Dureader: a Chinese Machine Reading Comprehension Dataset from Real-World Applications. Proceedings of the Workshop on Machine Reading for Question Answering, pp. 37–46. Association for Computational Linguistics.CrossRefGoogle Scholar
Hermann, K.M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M. and Blunsom, P. (2015). Teaching machines to read and comprehend. Advances in neural information processing systems, pp. 1693-701.Google Scholar
Hewlett, D., Jones, L. and Lacoste, A. (2017). Accurate supervised and semi-supervised machine reading for long documents. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2011-20.Google Scholar
Hill, F., Bordes, A., Chopra, S. and Weston, J. (2016). The goldilocks principle: Reading children’s books with explicit memory representations. Proceedings of the International Conference on Learning Representations (ICLR).Google Scholar
Hirschman, L., Light, M., Breck, E. and Burger, J.D. (1999). Deep Read: A Reading Comprehension System. Proceedings of the 37th annual meeting of the Association for Computational Linguistics, pp. 325-32.Google Scholar
Hoang, L., Wiseman, S. and Rush, A.M. (2018). Entity Tracking Improves Cloze-style Reading Comprehension. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1049–55CrossRefGoogle Scholar
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation 9, 17351780.CrossRefGoogle ScholarPubMed
Horbach, A., Aldabe, I., Bexte, M., de Lacalle, O.L. and Maritxalar, M. (2020). Linguistic Appropriateness and Pedagogic Usefulness of Reading Comprehension Questions. Proceedings of The 12th Language Resources and Evaluation Conference, pp. 1753-62.Google Scholar
Htut, P.M., Bowman, S.R. and Cho, K. (2018). Training a Ranking Function for Open-Domain Question Answering. Proceedings of NAACL-HLT 2018: Student Research Workshop, pp. 120–27.Google Scholar
Hu, M., Peng, Y., Huang, Z. and Li, D. (2019a). A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1596-606. Hong Kong, China.CrossRefGoogle Scholar
Hu, M., Peng, Y., Huang, Z. and Li, D. (2019b). Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy.CrossRefGoogle Scholar
Hu, M., Peng, Y., Huang, Z., Qiu, X., Wei, F. and Zhou, M. (2018a). Reinforced Mnemonic Reader for Machine Reading Comprehension. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), pp. 4099-106.Google Scholar
Hu, M., Peng, Y., Wei, F., Huang, Z., Li, D., Yang, N. and Zhou, M. (2018b). Attention-Guided Answer Distillation for Machine Reading Comprehension. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4243–52.CrossRefGoogle Scholar
Hu, M., Wei, F., Peng, Y., Huang, Z., Yang, N. and Li, D. (2019c). Read+ Verify: Machine Reading Comprehension with Unanswerable Questions. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 6529-37.CrossRefGoogle Scholar
Huang, H.-Y., Zhu, C., Shen, Y. and Chen, W. (2018). Fusionnet: Fusing via fully-aware attention with application to machine comprehension. Proceedings of the International Conference on Learning Representations (ICLR).Google Scholar
Huang, K., Tang, Y., Huang, J., He, X. and Zhou, B. (2019a). Relation Module for Non-Answerable Predictions on Reading Comprehension. Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pp. 747-56.CrossRefGoogle Scholar
Huang, L., Le Bras, R., Bhagavatula, C. and Choi, Y. (2019b). Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2391-401. Hong Kong, China.CrossRefGoogle Scholar
Indurthi, S., Yu, S., Back, S. and CuayÁhuitl, H. (2018). Cut to the Chase: A Context Zoom-in Network for Reading Comprehension. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 570–75. Association for Computational Linguistics.CrossRefGoogle Scholar
Ingale, V. and Singh, P. (2019). Datasets for Machine Reading Comprehension: A Literature Review. Available at SSRN 3454037.CrossRefGoogle Scholar
Inoue, N., Stenetorp, P. and Inui, K. (2020). R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6740-50.CrossRefGoogle Scholar
Jia, R. and Liang, P. (2017). Adversarial examples for evaluating reading comprehension systems. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing pp. 2021–31.CrossRefGoogle Scholar
Jiang, Y., Joshi, N., Chen, Y.-C. and Bansal, M. (2019). Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2714–25. Florence, Italy.CrossRefGoogle Scholar
Jin, D., Gao, S., Kao, J.-Y., Chung, T. and Hakkani-tur, D. (2020). Mmm: Multi-stage multi-task learning for multi-choice reading comprehension. The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20).CrossRefGoogle Scholar
Jin, W., Yang, G. and Zhu, H. (2019). An Efficient Machine Reading Comprehension Method Based on Attention Mechanism. 2019 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), pp. 1297-302. IEEE.CrossRefGoogle Scholar
Jing, Y., Xiong, D. and Zhen, Y. (2019). BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 2452–62. Hong Kong, China.CrossRefGoogle Scholar
Joshi, M., Choi, E., Weld, D.S. and Zettlemoyer, L. (2017). Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601-11.CrossRefGoogle Scholar
Jurafsky, D. and Martin, J.H. (2019). Speech and language processing. Pearson London.Google Scholar
Kadlec, R., Bajgar, O. and Kleindienst, J. (2016a). From Particular to General: A Preliminary Case Study of Transfer Learning in Reading Comprehension. NIPS Machine Intelligence Workshop.Google Scholar
Kadlec, R., Schmid, M., Bajgar, O. and Kleindienst, J. (2016b). Text understanding with the attention sum reader network. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics pp. 908–18.CrossRefGoogle Scholar
Ke, N.R., Zolna, K., Sordoni, A., Lin, Z., Trischler, A., Bengio, Y., Pineau, J., Charlin, L. and Pal, C. (2018). Focused Hierarchical RNNs for Conditional Sequence Processing. Proceedings of the 35 th International Conference on Machine Learning. Google Scholar
Khashabi, D., Chaturvedi, S., Roth, M., Upadhyay, S. and Roth, D. (2018). Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 252-62.Google Scholar
Kim, Y. (2014). Convolutional neural networks for sentence classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1746-51. Doha, Qatar.CrossRefGoogle Scholar
Kim, Y., Jernite, Y., Sontag, D. and Rush, A.M. (2016). Character-aware neural language models. Thirtieth AAAI Conference on Artificial Intelligence.CrossRefGoogle Scholar
Kobayashi, S., Tian, R., Okazaki, N. and Inui, K. (2016). Dynamic entity representation with max-pooling improves machine reading. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 850-55.CrossRefGoogle Scholar
KoČiskÝ, T., Schwarz, J., Blunsom, P., Dyer, C., Hermann, K.M., Melis, G. and Grefenstette, E. (2018). The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics 6, 317328.CrossRefGoogle Scholar
Kodra, L. and Meçe, E.K. (2017). Question Answering Systems: A Review on Present Developments, Challenges and Trends. INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE 8, 217224.Google Scholar
Kundu, S. and Ng, H.T. (2018a). A Nil-Aware Answer Extraction Framework for Question Answering. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4243-52.CrossRefGoogle Scholar
Kundu, S. and Ng, H.T. (2018b). A Question-Focused Multi-Factor Attention Network for Question Answering. Association for the Advancement of Artificial Intelligence (AAAI2018).Google Scholar
Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J. and Lee, K. (2019). Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7, 453466.Google Scholar
Lai, G., Xie, Q., Liu, H., Yang, Y. and Hovy, E. (2017). Race: Large-scale reading comprehension dataset from examinations. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 785–94CrossRefGoogle Scholar
LeCun, Y., Bottou, L., Bengio, Y. and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 22782324.CrossRefGoogle Scholar
Lee, G., Hwang, S.-w. and Cho, H. (2020). SQuAD2-CR: Semi-supervised Annotation for Cause and Rationales for Unanswerability in SQuAD 2.0. Proceedings of The 12th Language Resources and Evaluation Conference, pp. 5425-32.Google Scholar
Lee, H.-g. and Kim, H. (2020). GF-Net: Improving Machine Reading Comprehension with Feature Gates. Pattern Recognition Letters 129, 815.CrossRefGoogle Scholar
Lee, K., Park, S., Han, H., Yeo, J., Hwang, S.-w. and Lee, J. (2019a). Learning with limited data for multilingual reading comprehension. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2833-43.Google Scholar
Lee, S., Kim, D. and Park, J. (2019b). Domain-agnostic Question-Answering with Adversarial Training. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 196-202. Hong Kong, China.Google Scholar
Lehnert, W.G. (1977). The process of question answering. Yale Univ New Haven Conn Dept Of Computer Science.Google Scholar
Li, H., Zhang, X., Liu, Y., Zhang, Y., Wang, Q., Zhou, X., Liu, J., Wu, H. and Wang, H. (2019a). D-NET: A Simple Framework for Improving the Generalization of Machine Reading Comprehension. Proceedings of 2nd Machine Reading for Reading Comprehension Workshop at EMNLP.Google Scholar
Li, J., Li, B. and Lv, X. (2018). Machine Reading Comprehension Based on the Combination of BIDAF Model and Word Vectors. Proceedings of the 2nd International Conference on Computer Science and Application Engineering, pp. 89. ACM.CrossRefGoogle Scholar
Li, X., Zhang, Z., Zhu, W., Li, Z., Ni, Y., Gao, P., Yan, J. and Xie, G. (2019b). Pingan Smart Health and SJTU at COIN-Shared Task: utilizing Pre-trained Language Models and Common-sense Knowledge in Machine Reading Tasks. Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pp. 93-98.Google Scholar
Li, Y., Li, H. and Liu, J. (2019c). Towards Robust Neural Machine Reading Comprehension via Question Paraphrases. 2019 International Conference on Asian Language Processing (IALP), pp. 290–95. IEEE.
Liang, Y., Li, J. and Yin, J. (2019). A New Multi-Choice Reading Comprehension Dataset for Curriculum Learning. Asian Conference on Machine Learning, pp. 742–57.
Lin, C.-Y. (2004). ROUGE: A Package for Automatic Evaluation of Summaries. Proceedings of the Workshop on Text Summarization Branches Out (WAS 2004). Association for Computational Linguistics.
Lin, K., Tafjord, O., Clark, P. and Gardner, M. (2019). Reasoning Over Paragraph Effects in Situations. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 58–62. Hong Kong, China.
Lin, X., Liu, R. and Li, Y. (2018). An Option Gate Module for Sentence Inference on Machine Reading Comprehension. Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 1743–46. ACM.
Liu, C., Zhao, Y., Si, Q., Zhang, H., Li, B. and Yu, D. (2018a). Multi-Perspective Fusion Network for Commonsense Reading Comprehension. Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pp. 262–74. Springer.
Liu, D., Gong, Y., Fu, J., Yan, Y., Chen, J., Jiang, D., Lv, J. and Duan, N. (2020a). RikiNet: Reading Wikipedia Pages for Natural Question Answering. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6762–71.
Liu, F. and Perez, J. (2017). Gated end-to-end memory networks. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pp. 1–10.
Liu, J., Lin, Y., Liu, Z. and Sun, M. (2019a). XQA: A cross-lingual open-domain question answering dataset. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2358–68.
Liu, J., Wei, W., Sun, M., Chen, H., Du, Y. and Lin, D. (2018b). A Multi-answer Multi-task Framework for Real-world Machine Reading Comprehension. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2109–18.
Liu, K., Liu, X., Yang, A., Liu, J., Su, J., Li, S. and She, Q. (2020b). A Robust Adversarial Training Approach to Machine Reading Comprehension. AAAI, pp. 8392–400.
Liu, R., Hu, J., Wei, W., Yang, Z. and Nyberg, E. (2017). Structural embedding of syntactic trees for machine comprehension. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 815–24.
Liu, S., Zhang, S., Zhang, X. and Wang, H. (2019b). R-trans: RNN Transformer Network for Chinese Machine Reading Comprehension. IEEE Access 7, 27736–27745.
Liu, S., Zhang, X., Zhang, S., Wang, H. and Zhang, W. (2019c). Neural Machine Reading Comprehension: Methods and Trends. Applied Sciences 9, 3698.
Liu, X., Shen, Y., Duh, K. and Gao, J. (2018c). Stochastic Answer Networks for Machine Reading Comprehension. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pp. 1694–704.
Liu, Y., Huang, Z., Hu, M., Du, S., Peng, Y., Li, D. and Wang, X. (2018d). MFM: A Multi-level Fused Sequence Matching Model for Candidates Filtering in Multi-paragraphs Question-Answering. Pacific Rim Conference on Multimedia, pp. 449–58. Springer.
Longpre, S., Lu, Y., Tu, Z. and DuBois, C. (2019). An Exploration of Data Augmentation and Sampling Techniques for Domain-Agnostic Question Answering. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 220–27.
Ma, K., Jurczyk, T. and Choi, J.D. (2018). Challenging Reading Comprehension on Daily Conversation: Passage Completion on Multiparty Dialog. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2039–48.
Manning, C.D., Raghavan, P. and Schütze, H. (2008). Chapter 8: Evaluation in information retrieval. Introduction to Information Retrieval. Cambridge University Press.
Miao, H., Liu, R. and Gao, S. (2019a). A Multiple Granularity Co-Reasoning Model for Multi-choice Reading Comprehension. 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–7. IEEE.
Miao, H., Liu, R. and Gao, S. (2019b). Option Attentive Capsule Network for Multi-choice Reading Comprehension. International Conference on Neural Information Processing, pp. 306–18. Springer.
Mihaylov, T. and Frank, A. (2018). Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pp. 821–32.
Mihaylov, T. and Frank, A. (2019). Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2541–52. Hong Kong, China.
Mihaylov, T., Kozareva, Z. and Frank, A. (2017). Neural Skill Transfer from Supervised Language Tasks to Reading Comprehension. Workshop on Learning with Limited Labeled Data: Weak Supervision and Beyond at NIPS.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S. and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, pp. 3111–19.
Min, S., Seo, M. and Hajishirzi, H. (2017). Question answering through transfer learning from large fine-grained supervision data. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Short Papers), pp. 510–17.
Min, S., Zhong, V., Socher, R. and Xiong, C. (2018). Efficient and Robust Question Answering from Minimal Context over Documents. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pp. 1725–35.
Min, S., Zhong, V., Zettlemoyer, L. and Hajishirzi, H. (2019). Multi-hop Reading Comprehension through Question Decomposition and Rescoring. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6097–109. Florence, Italy.
Mozannar, H., Maamary, E., El Hajal, K. and Hajj, H. (2019). Neural Arabic Question Answering. Proceedings of the Fourth Arabic Natural Language Processing Workshop, pp. 108–18. Florence, Italy.
Munkhdalai, T. and Yu, H. (2017). Reasoning with memory augmented neural networks for language comprehension. Proceedings of the International Conference on Learning Representations (ICLR).
Nakatsuji, M. and Okui, S. (2020). Answer Generation through Unified Memories over Multiple Passages. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20).
Ng, H.T., Teo, L.H. and Kwan, J.L.P. (2000). A Machine Learning Approach to Answering Questions for Reading Comprehension Tests. Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics, pp. 124–32.
Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R. and Deng, L. (2016). MS MARCO: A Human Generated Machine Reading Comprehension Dataset. Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches (CoCo@NIPS 2016), co-located with the 30th Annual Conference on Neural Information Processing Systems, Barcelona, Spain.
Nie, Y., Wang, S. and Bansal, M. (2019). Revealing the Importance of Semantic Retrieval for Machine Reading at Scale. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2553–66. Hong Kong, China.
Nishida, K., Nishida, K., Nagata, M., Otsuka, A., Saito, I., Asano, H. and Tomita, J. (2019a). Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2335–45. Florence, Italy.
Nishida, K., Nishida, K., Saito, I., Asano, H. and Tomita, J. (2020). Unsupervised Domain Adaptation of Language Models for Reading Comprehension. Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pp. 5392–99.
Nishida, K., Saito, I., Nishida, K., Shinoda, K., Otsuka, A., Asano, H. and Tomita, J. (2019b). Multi-style Generative Reading Comprehension. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2273–84. Florence, Italy.
Nishida, K., Saito, I., Otsuka, A., Asano, H. and Tomita, J. (2018). Retrieve-and-read: Multi-task learning of information retrieval and reading comprehension. Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 647–56. ACM.
Niu, Y., Jiao, F., Zhou, M., Yao, T., Xu, J. and Huang, M. (2020). A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 3916–27.
Onishi, T., Wang, H., Bansal, M., Gimpel, K. and McAllester, D. (2016). Who did what: A large-scale person-centered cloze dataset. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2230–35.
Osama, R., El-Makky, N.M. and Torki, M. (2019). Question Answering Using Hierarchical Attention on Top of BERT Features. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 191–95.
Ostermann, S., Modi, A., Roth, M., Thater, S. and Pinkal, M. (2018). MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC), Miyazaki, Japan.
Ostermann, S., Roth, M. and Pinkal, M. (2019). MCScript2.0: A Machine Comprehension Corpus Focused on Script Events and Participants. Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pp. 103–17. Minneapolis, Minnesota.
Pampari, A., Raghavan, P., Liang, J. and Peng, J. (2018). emrQA: A Large Corpus for Question Answering on Electronic Medical Records. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2357–68.
Pang, L., Lan, Y., Guo, J., Xu, J., Su, L. and Cheng, X. (2019). HAS-QA: Hierarchical answer spans model for open-domain question answering. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 6875–82.
Papineni, K., Roukos, S., Ward, T. and Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–18. Association for Computational Linguistics.
Pappas, D., Stavropoulos, P., Androutsopoulos, I. and McDonald, R. (2020). BioMRC: A Dataset for Biomedical Machine Reading Comprehension. Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pp. 140–49.
Park, C., Lee, C. and Song, H. (2019). VS3-NET: Neural Variational Inference Model for Machine-Reading Comprehension. ETRI Journal 41, 771–781.
Pennington, J., Socher, R. and Manning, C. (2014). GloVe: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–43.
Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K. and Zettlemoyer, L. (2018). Deep contextualized word representations. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pp. 2227–37. New Orleans, Louisiana.
Prakash, T., Tripathy, B.K. and Banu, K.S. (2018). ALICE: A Natural Language Question Answering System Using Dynamic Attention and Memory. International Conference on Soft Computing Systems, pp. 274–82. Springer.
Pugaliya, H., Route, J., Ma, K., Geng, Y. and Nyberg, E. (2019). Bend but Don’t Break? Multi-Challenge Stress Test for QA Models. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 125–36.
Qiu, D., Zhang, Y., Feng, X., Liao, X., Jiang, W., Lyu, Y., Liu, K. and Zhao, J. (2019). Machine Reading Comprehension Using Structural Knowledge Graph-aware Network. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5898–903.
Radford, A., Narasimhan, K., Salimans, T. and Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI Technical Report.
Rajpurkar, P., Jia, R. and Liang, P. (2018). Know What You Don’t Know: Unanswerable Questions for SQuAD. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers), pp. 784–89.
Rajpurkar, P., Zhang, J., Lopyrev, K. and Liang, P. (2016). SQuAD: 100,000+ questions for machine comprehension of text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383–92.
Ran, Q., Lin, Y., Li, P., Zhou, J. and Liu, Z. (2019). NumNet: Machine Reading Comprehension with Numerical Reasoning. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2474–84. Hong Kong, China.
Reddy, S., Chen, D. and Manning, C.D. (2019). CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics 7, 249–266.
Ren, M., Huang, H., Wei, R., Liu, H., Bai, Y., Wang, Y. and Gao, Y. (2019). Multiple Perspective Answer Reranking for Multi-passage Reading Comprehension. CCF International Conference on Natural Language Processing and Chinese Computing, pp. 736–47. Springer.
Ren, Q., Cheng, X. and Su, S. (2020). Multi-Task Learning with Generative Adversarial Training for Multi-Passage Machine Reading Comprehension. AAAI, pp. 8705–12.
Richardson, M., Burges, C.J.C. and Renshaw, E. (2013). MCTest: A challenge dataset for the open-domain machine comprehension of text. Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 193–203.
Riloff, E. and Thelen, M. (2000). A Rule-Based Question Answering System for Reading Comprehension Tests. Proceedings of the 2000 ANLP/NAACL Workshop on Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems, pp. 13–19. Association for Computational Linguistics.
Ruder, S. (2019). Neural Transfer Learning for Natural Language Processing. PhD thesis, National University of Ireland, Galway.
Sachan, M. and Xing, E. (2018). Self-Training for Jointly Learning to Ask and Answer Questions. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 629–40.
Saha, A., Aralikatte, R., Khapra, M.M. and Sankaranarayanan, K. (2018). DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pp. 1683–93.
Salant, S. and Berant, J. (2018). Contextualized Word Representations for Reading Comprehension. Proceedings of NAACL-HLT 2018, pp. 554–59.
Sayama, H.F., Araujo, A.V. and Fernandes, E.R. (2019). FaQuAD: Reading Comprehension Dataset in the Domain of Brazilian Higher Education. 2019 8th Brazilian Conference on Intelligent Systems (BRACIS), pp. 443–48. IEEE.
Schlegel, V., Valentino, M., Freitas, A., Nenadic, G. and Batista-Navarro, R. (2020). A Framework for Evaluation of Machine Reading Comprehension Gold Standards. Proceedings of the 12th Conference on Language Resources and Evaluation, pp. 5359–69.
Seo, M., Kembhavi, A., Farhadi, A. and Hajishirzi, H. (2017). Bidirectional attention flow for machine comprehension. Proceedings of the 5th International Conference on Learning Representations (ICLR).
Seo, M., Kwiatkowski, T., Parikh, A.P., Farhadi, A. and Hajishirzi, H. (2018). Phrase-Indexed Question Answering: A New Challenge for Scalable Document Comprehension. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 559–64.
Shao, C.C., Liu, T., Lai, Y., Tseng, Y. and Tsai, S. (2018). DRCD: A Chinese Machine Reading Comprehension Dataset. Proceedings of the Workshop on Machine Reading for Question Answering, pp. 37–46. Association for Computational Linguistics.
Sharma, P. and Roychowdhury, S. (2019). IIT-KGP at COIN 2019: Using pre-trained Language Models for modeling Machine Comprehension. Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pp. 80–84.
Shen, Y., Huang, P.-S., Gao, J. and Chen, W. (2017). ReasoNet: Learning to Stop Reading in Machine Comprehension. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2017), pp. 1047–55. ACM.
Sheng, Y., Lan, M. and Wu, Y. (2018). ECNU at SemEval-2018 Task 11: Using Deep Learning Method to Address Machine Comprehension Task. Proceedings of the 12th International Workshop on Semantic Evaluation, pp. 1048–52. Association for Computational Linguistics.
Song, J., Tang, S., Qian, T., Zhu, W. and Wu, F. (2018). Reading Document and Answering Question via Global Attentional Inference. Pacific Rim Conference on Multimedia (PCM 2018), pp. 335–45. Springer.
Song, L., Wang, Z., Yu, M., Zhang, Y., Florian, R. and Gildea, D. (2020). Evidence Integration for Multi-hop Reading Comprehension with Graph Neural Networks. IEEE Transactions on Knowledge and Data Engineering.
Soni, S. and Roberts, K. (2020). Evaluation of Dataset Selection for Pre-Training and Fine-Tuning Transformer Language Models for Clinical Question Answering. Proceedings of the 12th Language Resources and Evaluation Conference, pp. 5532–38.
Srivastava, R.K., Greff, K. and Schmidhuber, J. (2015). Training very deep networks. Advances in Neural Information Processing Systems, pp. 2377–85.
Su, D., Xu, Y., Winata, G.I., Xu, P., Kim, H., Liu, Z. and Fung, P. (2019). Generalizing Question Answering System with Pre-trained Language Model Fine-tuning. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 203–11.
Sugawara, S., Inui, K., Sekine, S. and Aizawa, A. (2018). What Makes Reading Comprehension Questions Easier? Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4208–19.
Sugawara, S., Kido, Y., Yokono, H. and Aizawa, A. (2017). Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 806–17.
Sugawara, S., Stenetorp, P., Inui, K. and Aizawa, A. (2020). Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets. AAAI, pp. 8918–27.
Sun, K., Yu, D., Yu, D. and Cardie, C. (2019). Improving Machine Reading Comprehension with General Reading Strategies. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2633–43.
Sun, K., Yu, D., Yu, D. and Cardie, C. (2020). Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension. Transactions of the Association for Computational Linguistics 8, 141–155.
Šuster, S. and Daelemans, W. (2018). CliCR: A Dataset of Clinical Case Reports for Machine Reading Comprehension. Proceedings of NAACL-HLT 2018, pp. 1551–63.
Swayamdipta, S., Parikh, A.P. and Kwiatkowski, T. (2018). Multi-mention learning for reading comprehension with neural cascades. Proceedings of the International Conference on Learning Representations (ICLR).
Takahashi, T., Taniguchi, M., Taniguchi, T. and Ohkuma, T. (2019). CLER: Cross-task learning with expert representation to generalize reading and understanding. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 183–90.
Talmor, A. and Berant, J. (2019). MultiQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4911–21. Florence, Italy.
Tan, C., Wei, F., Yang, N., Du, B., Lv, W. and Zhou, M. (2018a). S-Net: From Answer Extraction to Answer Synthesis for Machine Reading Comprehension. Association for the Advancement of Artificial Intelligence (AAAI).
Tan, C., Wei, F., Zhou, Q., Yang, N., Lv, W. and Zhou, M. (2018b). I Know There Is No Answer: Modeling Answer Validation for Machine Reading Comprehension. CCF International Conference on Natural Language Processing and Chinese Computing, pp. 85–97. Springer.
Tang, H., Hong, Y., Chen, X., Wu, K. and Zhang, M. (2019a). How to Answer Comparison Questions. 2019 International Conference on Asian Language Processing (IALP), pp. 331–36. IEEE.
Tang, M., Cai, J. and Zhuo, H.H. (2019b). Multi-matching network for multiple choice reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 7088–95.
Tay, Y., Luu, A.T. and Hui, S.C. (2018). Multi-Granular Sequence Encoding via Dilated Compositional Units for Reading Comprehension. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2141–51.
Tay, Y., Wang, S., Luu, A.T., Fu, J., Phan, M.C., Yuan, X., Rao, J., Hui, S.C. and Zhang, A. (2019). Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4922–31. Florence, Italy.
Trischler, A., Wang, T., Yuan, X., Harris, J., Sordoni, A., Bachman, P. and Suleman, K. (2017). NewsQA: A machine comprehension dataset. Proceedings of the 2nd Workshop on Representation Learning for NLP.
Tu, M., Huang, K., Wang, G., Huang, J., He, X. and Zhou, B. (2020). Select, Answer and Explain: Interpretable Multi-Hop Reading Comprehension over Multiple Documents. AAAI, pp. 9073–80.
Tu, M., Wang, G., Huang, J., Tang, Y., He, X. and Zhou, B. (2019). Multi-hop Reading Comprehension across Multiple Documents by Reasoning over Heterogeneous Graphs. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2704–13. Florence, Italy.
Turpin, A. and Scholer, F. (2006). User performance versus precision measures for simple search tasks. Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 11–18. ACM.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, pp. 5998–6008.
Vedantam, R., Lawrence Zitnick, C. and Parikh, D. (2015). CIDEr: Consensus-based image description evaluation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4566–75.
Wang, B., Guo, S., Liu, K., He, S. and Zhao, J. (2016). Employing External Rich Knowledge for Machine Comprehension. International Joint Conference on Artificial Intelligence (IJCAI), pp. 2929–35.
Wang, B., Yao, T., Zhang, Q., Xu, J. and Wang, X. (2020b). ReCO: A Large Scale Chinese Reading Comprehension Dataset on Opinion. AAAI, pp. 9146–53.
Wang, B., Yao, T., Zhang, Q., Xu, J., Liu, K., Tian, Z. and Zhao, J. (2019a). Unsupervised Story Comprehension with Hierarchical Encoder-Decoder. Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, pp. 93–100.
Wang, B., Zhang, X., Zhou, X. and Li, J. (2020a). A Gated Dilated Convolution with Attention Model for Clinical Cloze-Style Reading Comprehension. International Journal of Environmental Research and Public Health 17, 1323.
Wang, C. and Jiang, H. (2019). Explicit Utilization of General Knowledge in Machine Reading Comprehension. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2263–72. Florence, Italy.
Wang, H., Gan, Z., Liu, X., Liu, J., Gao, J. and Wang, H. (2019e). Adversarial Domain Adaptation for Machine Reading Comprehension. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2510–20. Hong Kong, China.
Wang, H., Lu, W. and Tang, Z. (2019d). Incorporating External Knowledge to Boost Machine Comprehension Based Question Answering. European Conference on Information Retrieval, pp. 819–27. Springer.
Wang, H., Yu, D., Sun, K., Chen, J., Yu, D., McAllester, D. and Roth, D. (2019b). Evidence Sentence Extraction for Machine Reading Comprehension. Proceedings of the 23rd Conference on Computational Natural Language Learning, pp. 696–707. Hong Kong, China.
Wang, H., Yu, M., Guo, X., Das, R., Xiong, W. and Gao, T. (2019c). Do Multi-hop Readers Dream of Reasoning Chains? Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 91–97. Hong Kong, China.
Wang, S. and Jiang, J. (2017). Machine comprehension using Match-LSTM and answer pointer. Proceedings of the International Conference on Learning Representations (ICLR), pp. 1–15. Toulon, France.
Wang, S., Yu, M., Chang, S. and Jiang, J. (2018a). A Co-Matching Model for Multi-choice Reading Comprehension. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers), pp. 746–51.
Wang, S., Yu, M., Guo, X., Wang, Z., Klinger, T., Zhang, W., Chang, S., Tesauro, G., Zhou, B. and Jiang, J. (2018b). R3: Reinforced ranker-reader for open-domain question answering. Association for the Advancement of Artificial Intelligence (AAAI 2018).
Wang, S., Yu, M., Jiang, J., Zhang, W., Guo, X., Chang, S., Wang, Z., Klinger, T., Tesauro, G. and Campbell, M. (2018c). Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering. Proceedings of the International Conference on Learning Representations (ICLR).
Wang, T., Yuan, X. and Trischler, A. (2017a). A joint model for question answering and question generation. Learning to Generate Natural Language Workshop, ICML 2017.
Wang, W., Yan, M. and Wu, C. (2018d). Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1705–14.
Wang, W., Yang, N., Wei, F., Chang, B. and Zhou, M. (2017b). Gated self-matching networks for reading comprehension and question answering. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 189–98.
Wang, Y. and Bansal, M. (2018). Robust Machine Comprehension Models via Adversarial Training. Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), pp. 575–81. Association for Computational Linguistics.
Wang, Y., Liu, K., Liu, J., He, W., Lyu, Y., Wu, H., Li, S. and Wang, H. (2018e). Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1918–27.
Wang, Z., Liu, J., Xiao, X., Lyu, Y. and Wu, T. (2018f). Joint Training of Candidate Extraction and Answer Selection for Reading Comprehension. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pp. 1715–24.
Watarai, T. and Tsuchiya, M. (2020). Developing Dataset of Japanese Slot Filling Quizzes Designed for Evaluation of Machine Reading Comprehension. Proceedings of the 12th Language Resources and Evaluation Conference, pp. 6895–901.
Weissenborn, D., Wiese, G. and Seiffe, L. (2017). Making Neural QA as Simple as Possible but not Simpler. Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL), pp. 271–80. Vancouver, Canada.
Welbl, J., Liu, N.F. and Gardner, M. (2017). Crowdsourcing multiple choice science questions. Proceedings of the 3rd Workshop on Noisy User-generated Text, pp. 94–106. Association for Computational Linguistics.
Welbl, J., Minervini, P., Bartolo, M., Stenetorp, P. and Riedel, S. (2020). Undersensitivity in neural reading comprehension. International Conference on Learning Representations (ICLR 2020).
Welbl, J., Stenetorp, P. and Riedel, S. (2018). Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics 6, 287–302.
Wu, B., Huang, H., Wang, Z., Feng, Q., Yu, J. and Wang, B. (2019). Improving the Robustness of Deep Reading Comprehension Models by Leveraging Syntax Prior. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 53–57.
Wu, Z. and Xu, H. (2020). Improving the Robustness of Machine Reading Comprehension Model with Hierarchical Knowledge and Auxiliary Unanswerability Prediction. Knowledge-Based Systems, 106075.
Xia, J., Wu, C. and Yan, M. (2019). Incorporating Relation Knowledge into Commonsense Reading Comprehension with Multi-task Learning. Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 2393–96. Beijing, China.CrossRefGoogle Scholar
Xie, P. and Xing, E. (2017). A constituent-centric neural architecture for reading comprehension. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1405-14.CrossRefGoogle Scholar
Xie, Q., Lai, G., Dai, Z. and Hovy, E. (2018). Large-scale Cloze Test Dataset Created by Teachers. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2344-56.CrossRefGoogle Scholar
Xiong, C., Zhong, V. and Socher, R. (2017). Dynamic coattention networks for question answering. Proceedings of the 5th International Conference on Learning Representations (ICLR).Google Scholar
Xiong, C., Zhong, V. and Socher, R. (2018). Dcn+: Mixed objective and deep residual coattention for question answering. Proceedings of the International Conference on Learning Representations (ICLR).Google Scholar
Xiong, W., Yu, M., Guo, X., Wang, H., Chang, S., Campbell, M. and Wang, W.Y. (2019). Simple yet Effective Bridge Reasoning for Open-Domain Multi-Hop Question Answering. Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 48-52. Hong Kong, China.CrossRefGoogle Scholar
Xu, Y., Liu, W., Chen, G., Ren, B., Zhang, S., Gao, S. and Guo, J. (2019a). Enhancing Machine Reading Comprehension With Position Information. IEEE Access 7, 141602–141611.CrossRefGoogle Scholar
Xu, Y., Liu, X., Shen, Y., Liu, J. and Gao, J. (2019b). Multi-task Learning with Sample Re-weighting for Machine Reading Comprehension. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2644-55. Minneapolis, Minnesota.CrossRefGoogle Scholar
Yadav, M., Vig, L. and Shroff, G. (2017). Learning and Knowledge Transfer with Memory Networks for Machine Comprehension. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pp. 850-59.CrossRefGoogle Scholar
Yan, M., Xia, J., Wu, C., Bi, B., Zhao, Z., Zhang, J., Si, L., Wang, R., Wang, W. and Chen, H. (2019). A deep cascade model for multi-document reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 7354-61.CrossRefGoogle Scholar
Yan, M., Zhang, H., Jin, D. and Zhou, J.T. (2020). Multi-source Meta Transfer for Low Resource Multiple-Choice Question Answering. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7331-41.CrossRefGoogle Scholar
Yang, A., Wang, Q., Liu, J., Liu, K., Lyu, Y., Wu, H., She, Q. and Li, S. (2019a). Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2346-57.CrossRefGoogle Scholar
Yang, Y., Kang, S. and Seo, J. (2020). Improved Machine Reading Comprehension Using Data Validation for Weakly Labeled Data. IEEE Access 8, 5667–5677.CrossRefGoogle Scholar
Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. and Le, Q.V. (2019b). XLNet: Generalized Autoregressive Pretraining for Language Understanding. Advances in neural information processing systems, pp. 5753-63.Google Scholar
Yang, Z., Dhingra, B., Yuan, Y., Hu, J., Cohen, W.W. and Salakhutdinov, R. (2017a). Words or characters? fine-grained gating for reading comprehension. Proceedings of the 5th International Conference on Learning Representations, (ICLR), Toulon, France.Google Scholar
Yang, Z., Hu, J., Salakhutdinov, R. and Cohen, W.W. (2017b). Semi-supervised QA with generative domain-adaptive nets. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1040-50.CrossRefGoogle Scholar
Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W., Salakhutdinov, R. and Manning, C.D. (2018). HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369-80.CrossRefGoogle Scholar
Yao, J., Feng, M., Feng, H., Wang, Z., Zhang, Y. and Xue, N. (2019). Smart: A stratified machine reading test. CCF International Conference on Natural Language Processing and Chinese Computing, pp. 67-79. Springer.CrossRefGoogle Scholar
Yin, W., Ebert, S. and Schütze, H. (2016). Attention-based convolutional neural network for machine comprehension. Proceedings of 2016 NAACL Human-Computer Question Answering Workshop, pp. 15–21.CrossRefGoogle Scholar
Yu, A.W., Dohan, D., Luong, M.-T., Zhao, R., Chen, K., Norouzi, M. and Le, Q.V. (2018). QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. Proceedings of the International Conference on Learning Representations (ICLR).Google Scholar
Yu, J., Zha, Z. and Yin, J. (2019). Inferential machine comprehension: Answering questions by recursively deducing the evidence chain from text. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2241-51.CrossRefGoogle Scholar
Yu, W., Jiang, Z., Dong, Y. and Feng, J. (2020). ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning. International Conference on Learning Representations (ICLR2020).Google Scholar
Yuan, F., Shou, L., Bai, X., Gong, M., Liang, Y., Duan, N., Fu, Y. and Jiang, D. (2020a). Enhancing Answer Boundary Detection for Multilingual Machine Reading Comprehension. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 925–34.CrossRefGoogle Scholar
Yuan, F., Xu, Y., Lin, Z., Wang, W. and Shi, G. (2019). Multi-perspective Denoising Reader for Multi-paragraph Reading Comprehension. International Conference on Neural Information Processing, pp. 222-34. Springer.CrossRefGoogle Scholar
Yuan, X., Fu, J., Cote, M.-A., Tay, Y., Pal, C. and Trischler, A. (2020b). Interactive machine comprehension with information seeking agents. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2325–38.CrossRefGoogle Scholar
Yue, X., Gutierrez, B.J. and Sun, H. (2020). Clinical Reading Comprehension: A Thorough Analysis of the emrQA Dataset. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.CrossRefGoogle Scholar
Zhang, C., Luo, C., Lu, J., Liu, A., Bai, B., Bai, K. and Xu, Z. (2020a). Read, Attend, and Exclude: Multi-Choice Reading Comprehension by Mimicking Human Reasoning Process. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1945-48.Google Scholar
Zhang, J., Zhu, X., Chen, Q., Ling, Z., Dai, L., Wei, S. and Jiang, H. (2017). Exploring question representation and adaptation with neural networks. Computer and Communications (ICCC), 2017 3rd IEEE International Conference on, pp. 1975-84. IEEE.CrossRefGoogle Scholar
Zhang, S., Zhao, H., Wu, Y., Zhang, Z., Zhou, X. and Zhou, X. (2020b). DCMN+: Dual co-matching network for multi-choice reading comprehension. AAAI.CrossRefGoogle Scholar
Zhang, X. and Wang, Z. (2020). Reception: Wide and Deep Interaction Networks for Machine Reading Comprehension (Student Abstract). AAAI, pp. 13987-88.CrossRefGoogle Scholar
Zhang, X., Wu, J., He, Z., Liu, X. and Su, Y. (2018). Medical Exam Question Answering with Large-scale Reading Comprehension. The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18).CrossRefGoogle Scholar
Zhang, X., Yang, A., Li, S. and Wang, Y. (2019a). Machine Reading Comprehension: a Literature Review. arXiv preprint arXiv:1907.01686.Google Scholar
Zhang, Z., Wu, Y., Zhou, J., Duan, S., Zhao, H. and Wang, R. (2020c). SG-Net: Syntax-Guided Machine Reading Comprehension. AAAI, pp. 9636-43.Google Scholar
Zhang, Z., Zhao, H., Ling, K., Li, J., Li, Z., He, S. and Fu, G. (2019b). Effective subword segmentation for text comprehension. IEEE/ACM Transactions on Audio, Speech, and Language Processing 27, 1664–1674.CrossRefGoogle Scholar
Zheng, B., Wen, H., Liang, Y., Duan, N., Che, W., Jiang, D., Zhou, M. and Liu, T. (2020). Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6708–18.CrossRefGoogle Scholar
Zhou, M., Huang, M. and Zhu, X. (2020a). Robust reading comprehension with linguistic constraints via posterior regularization. IEEE/ACM Transactions on Audio, Speech, and Language Processing 28.CrossRefGoogle Scholar
Zhou, X., Luo, S. and Wu, Y. (2020b). Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 9725-32.CrossRefGoogle Scholar
Zhu, H., Dong, L., Wei, F., Wang, W., Qin, B. and Liu, T. (2019). Learning to Ask Unanswerable Questions for Machine Reading Comprehension. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4238-48. Florence, Italy.CrossRefGoogle Scholar
Zhuang, Y. and Wang, H. (2019). Token-level dynamic self-attention network for multi-passage reading comprehension. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2252-62.CrossRefGoogle Scholar