
Incremental and Iterative Learning of Answer Set Programs from Mutually Distinct Examples

Published online by Cambridge University Press: 10 August 2018

ARINDAM MITRA
Affiliation: Arizona State University (e-mail: [email protected])
CHITTA BARAL
Affiliation: Arizona State University (e-mail: [email protected])

Abstract


Over the years, the Artificial Intelligence (AI) community has produced several datasets that have given machine learning algorithms the opportunity to learn various skills across various domains. However, the subclass of machine learning algorithms aimed at learning logic programs, namely Inductive Logic Programming (ILP) algorithms, has often failed at this task due to the vastness of these datasets. This has limited the usability of knowledge representation and reasoning techniques in the development of AI systems. In this research, we address this scalability issue for algorithms that learn answer set programs. We present a sound and complete algorithm that takes its input in a slightly different manner and performs an efficient, more user-controlled search for a solution. We show via experiments that our algorithm can learn from two popular datasets from the machine learning community, namely bAbI (a question answering dataset) and MNIST (a dataset for handwritten digit recognition), which to the best of our knowledge was not previously possible. The system is publicly available at https://goo.gl/KdWAcV.
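To give a sense of the general setting the abstract describes, the following is a minimal toy sketch (not the paper's algorithm) of incremental hypothesis elimination from labelled examples. The hypothesis names and predicates are hypothetical, chosen only for illustration; in ILP the hypotheses would be logic-program clauses rather than Python predicates, but the same idea applies: consuming examples one at a time lets inconsistent candidates be pruned early instead of evaluating every candidate against the full dataset.

```python
# Toy illustration only (assumed, generic version-space elimination;
# NOT the algorithm of Mitra and Baral). Hypotheses here are simple
# predicates over integers standing in for candidate logic programs.

# Hypothetical candidate hypothesis space
hypotheses = {
    "even":     lambda x: x % 2 == 0,
    "odd":      lambda x: x % 2 == 1,
    "small":    lambda x: x < 5,
    "positive": lambda x: x > 0,
}

# Labelled examples, consumed incrementally rather than in one batch
examples = [(2, True), (4, True), (3, False), (7, False)]

consistent = dict(hypotheses)
for x, label in examples:
    # Drop every hypothesis that disagrees with the new example;
    # early pruning is what keeps the search from re-scanning the
    # whole dataset for every surviving candidate.
    consistent = {name: h for name, h in consistent.items()
                  if h(x) == label}

print(sorted(consistent))  # ['even'] is the only surviving hypothesis
```

Only the "even" predicate agrees with all four examples, so it alone survives; the other three candidates are eliminated as soon as the first contradicting example arrives.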

Type
Original Article
Creative Commons
This is a work of the U.S. Government and is not subject to copyright protection in the United States.
Copyright
Copyright © Cambridge University Press 2018

Supplementary material: PDF
Mitra and Baral supplementary material (PDF, 187.7 KB)