
Answering engineers' questions using semantic annotations

Published online by Cambridge University Press:  19 March 2007

SANGHEE KIM
Affiliation:
Engineering Design Centre, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
ROB H. BRACEWELL
Affiliation:
Engineering Design Centre, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
KEN M. WALLACE
Affiliation:
Engineering Design Centre, Department of Engineering, University of Cambridge, Cambridge, United Kingdom

Abstract

Question-answering (QA) systems have proven helpful, especially to users who are uncomfortable entering keywords, sometimes extended with search operators such as + and *. In developing such systems, the main focus has been on improving retrieval performance, and recent trends in QA research center on the extraction of exact answers. However, when the usability of these systems was evaluated, some users found the answers difficult to accept because supporting context and rationale were absent. Current approaches to this problem provide answers together with linking paragraphs or with summarizing extensions. Both methods are adequate for questions seeking the name of an object or a quantity that has a single answer. However, neither addresses the situation in which an answer requires comparing and integrating information spread across multiple documents, or across several places in a single document. This paper argues that coherent answer generation is crucial for such questions, and that the key to this coherence is analyzing texts to a level beyond sentence annotations. To demonstrate this idea, a prototype based on rhetorical structure theory has been developed and a preliminary evaluation carried out. The evaluation indicates that users prefer the extended answers that such semantic annotations can generate, provided that additional context and rationale information are made available.
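The idea of extending an answer with context and rationale via rhetorical structure can be illustrated with a minimal sketch. The paper itself does not publish its implementation; the node structure, relation names, and example sentences below are illustrative assumptions. The sketch walks a small discourse tree and keeps, alongside the answer-bearing nucleus, only those satellites whose rhetorical relation (e.g., cause, elaboration) supplies supporting context:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RSTNode:
    """One text span in a discourse tree (hypothetical structure,
    not the authors' actual data model)."""
    text: str
    relation: str = "nucleus"  # rhetorical relation to the parent nucleus
    children: List["RSTNode"] = field(default_factory=list)

# Relations assumed here to carry context/rationale worth showing the user.
CONTEXT_RELATIONS = {"elaboration", "evidence", "cause", "background"}

def extend_answer(node: RSTNode) -> List[str]:
    """Return the nucleus text plus any satellite spans whose relation
    provides supporting context or rationale."""
    parts = [node.text]
    for child in node.children:
        if child.relation in CONTEXT_RELATIONS:
            parts.extend(extend_answer(child))
    return parts

# Invented example: the direct answer plus two annotated satellites.
tree = RSTNode(
    "The bracket failed at 40 kN.",
    children=[
        RSTNode("Fatigue cracks initiated at the weld toe.", "cause"),
        RSTNode("The test rig is shown in Figure 3.", "attribution"),
    ],
)
print(" ".join(extend_answer(tree)))
```

Here the "cause" satellite is attached to the answer while the "attribution" satellite is filtered out, mirroring the paper's claim that annotations beyond the sentence level let the system decide which surrounding text makes an answer coherent.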

Type
Research Article
Copyright
© 2007 Cambridge University Press

