AI researchers are becoming increasingly aware of the importance of reasoning about knowledge and belief. This paper reviews epistemic logic, a logic designed specifically for this type of reasoning. I introduce the logic and discuss some of the philosophical problems associated with it. I then compare two different styles of implementing theorem provers for epistemic logic. I also briefly discuss autoepistemic logic, a form of epistemic logic intended to model an agent's introspective reasoning, i.e. an agent's reasoning about its own beliefs. Finally, I discuss some of the proposals in the AI literature that aim to avoid the philosophical problems that dog both epistemic and autoepistemic logic. This paper is not a full introduction to the field. Rather, it is intended to give the reader some flavour of the problems that research in this area faces, as well as some of the proposals for solving them.