Knowledge collection systems often assume they are cooperating with an unbiased expert. They have few mechanisms for checking and correcting the realism of the expertise transferred to the knowledge base, plan, document or other product of the interaction. The same problem arises when human knowledge engineers interview experts: the knowledge engineer may share the domain expert's biases. Such biases then remain in the knowledge base and cause difficulties for years to come.
To prevent such difficulties, this paper introduces “critic engineering”, a methodology for doubting, trapping and repairing expert judgment during a knowledge collection process. Under this method, the human expert and a knowledge-based critic form a cooperative system; neither agent alone can complete the task as well as the two together.
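The doubt-trap-repair cycle described above can be sketched in code. This is a minimal illustration, not the paper's implementation: every name here (`Claim`, `critic_checks`, `collect`, the specific bias thresholds) is a hypothetical assumption introduced only to show how a knowledge-based critic might flag suspect expert judgments and route them back for repair before they enter the knowledge base.

```python
# Hypothetical sketch of a doubt-trap-repair loop for knowledge collection.
# All names and thresholds are illustrative assumptions, not from the paper.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    confidence: float          # expert's stated confidence, 0..1
    evidence_count: int        # independent observations backing the claim
    flags: list = field(default_factory=list)

def critic_checks(claim: Claim) -> Claim:
    """Doubt and trap: flag claims that look overconfident or unsupported."""
    if claim.confidence > 0.9 and claim.evidence_count < 3:
        claim.flags.append("overconfidence: high certainty, little evidence")
    if claim.evidence_count == 0:
        claim.flags.append("unsupported: no evidence recorded")
    return claim

def collect(claims, repair):
    """Repair: send flagged claims back to the expert before storing them."""
    knowledge_base = []
    for claim in map(critic_checks, claims):
        knowledge_base.append(repair(claim) if claim.flags else claim)
    return knowledge_base
```

The point of the sketch is the division of labor: the expert supplies claims, the critic doubts and traps them, and only repaired or unflagged claims reach the knowledge base — neither agent completes the task alone.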
The methodology suggested here extends traditional knowledge engineering techniques. Traditional knowledge engineering answers the questions delineated in generic task (GT) theory, yet GT theory omits four additional sets of questions that must be answered to engineer a knowledge base, plan, design or diagnosis when the expert is prone to error. This extended methodology is called “critic engineering”.