
Computational Tractability and Conceptual Coherence: Why Do Computer Scientists Believe that P≠NP?

Published online by Cambridge University Press: 01 January 2020

Paul Thagard*
Affiliation:
University of Waterloo, Waterloo, ON, Canada N2L 3G1

Extract

According to Church’s thesis, we can identify the intuitive concept of effective computability with such well-defined mathematical concepts as Turing computability and partial recursiveness. The almost universal acceptance of Church’s thesis among logicians and computer scientists is puzzling from some epistemological perspectives, since no formal proof is possible of a thesis that involves an informal concept such as effectiveness. Elliott Mendelson has recently argued, however, that equivalencies between intuitive notions and precise notions need not always be considered unprovable theses, and that Church’s thesis should be accepted as true.
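Stated schematically, in the standard textbook form rather than as a quotation from Mendelson or Church, the thesis asserts an equivalence between one informal and two formal notions: for any partial function f on the natural numbers,

    f is effectively computable ⟺ f is Turing computable ⟺ f is partial recursive.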

I want to discuss a thesis that is nearly as important in current research in computer science as Church’s thesis. I call the newer thesis the tractability thesis, since it identifies the intuitive class of computationally tractable problems with a precise class of problems whose solutions can be computed in polynomial time. After briefly reviewing the theory of intractability, I compare the grounds for accepting the tractability thesis with the grounds for accepting Church's thesis. Intimately connected with the tractability thesis is the mathematical conjecture, whose meaning I shall shortly explain, that P≠NP. Unlike Church's thesis, this conjecture is precise enough to be capable of mathematical proof, but most computer scientists believe it even though no proof has been found. As we shall see below, understanding the grounds for acceptance of the conjecture that P≠NP has implications for general questions in the philosophy of mathematics and science, especially concerning the epistemological importance of explanatory and conceptual coherence.
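For readers unfamiliar with the notation, the two classes and the conjecture can be glossed roughly as follows; this is the standard textbook characterization, in the spirit of Garey and Johnson, not a formulation quoted from the article:

    P  = the class of decision problems that some deterministic algorithm solves in time O(n^k) for a fixed constant k, where n is the size of the input;
    NP = the class of decision problems that a nondeterministic algorithm solves in polynomial time (equivalently, the problems whose proposed solutions can be verified deterministically in polynomial time).

The tractability thesis identifies the intuitively tractable problems with those in P, and the conjecture that P≠NP says that some problems in NP, in particular the NP-complete ones, have no polynomial-time algorithm.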

Type: Research Article
Copyright: © The Authors 1993


References

1 Mendelson, E. ‘Second Thoughts about Church’s Thesis and Mathematical Proofs,’ Journal of Philosophy 87 (1990) 225-33

2 Here ‘on the order of’ means that there is an upper bound on the time taken, within a constant factor. Everything in this section can be stated more precisely in terms of formal languages, but I keep my presentation informal. For a more precise account, see for example Garey, M. and Johnson, D. Computers and Intractability: A Guide to the Theory of NP-Completeness (New York: W.H. Freeman 1979); or Cormen, T. Leiserson, C. and Rivest, R. Introduction to Algorithms (Cambridge, MA: MIT Press 1990). I have drawn heavily on these two texts.
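In standard asymptotic (‘big-O’) notation, ‘upper bound within a constant factor’ amounts to the following definition, added here as a gloss rather than taken from the note itself:

    f(n) is O(g(n)) if and only if there are constants c > 0 and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.

An algorithm thus runs ‘on the order of’ n² steps when its running time is O(n²).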

3 What ‘nondeterminism’ means here is best understood in terms of the theory of finite automata. A deterministic finite automaton has a transition function that specifies exactly, given the automaton’s current state and input, what the next state will be. In a nondeterministic automaton, the transition function specifies a set of states that the automaton can go into, given its current state and input. Which of the possible next states the automaton will go into can only be guessed.
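In standard automata-theory notation (a gloss added here, not notation used in the article), the contrast is between two forms of the transition function δ for a machine with state set Q and input alphabet Σ:

    deterministic:    δ : Q × Σ → Q    (exactly one next state)
    nondeterministic: δ : Q × Σ → 2^Q  (a set of possible next states)

A nondeterministic machine is conventionally said to accept an input if at least one of the possible sequences of choices leads to an accepting state.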

4 Cook, S. ‘The Complexity of Theorem Proving Procedures,’ in Proceedings of the Third Annual ACM Symposium on Theory of Computing (Association for Computing Machinery 1971) 151-8. For a very brief historical sketch, see Cormen et al., Introduction to Algorithms, 963.

5 Allemang, D. Tanner, M. Bylander, T. and Josephson, J. ‘Computational Complexity of Hypothesis Assembly,’ Proceedings of the Tenth International Joint Conference on Artificial Intelligence, Volume 2 (San Mateo: Morgan Kaufmann 1987), 1112-17

6 Cooper, G. ‘The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks,’ Artificial Intelligence 42 (1990) 393-405

7 Cormen et al., Introduction to Algorithms, 927; cf. Garey and Johnson, Computers and Intractability, 33

8 Cormen et al., Introduction to Algorithms, 916; cf. Garey and Johnson, Computers and Intractability, 33

9 Harman, G. Thought (Princeton: Princeton University Press 1973); Harman, G. Change in View: Principles of Reasoning (Cambridge, MA: MIT Press/Bradford Books 1986); Thagard, P. ‘The Best Explanation: Criteria for Theory Choice,’ Journal of Philosophy 75 (1978) 76-92; Thagard, P. Computational Philosophy of Science (Cambridge, MA: MIT Press/Bradford Books)

10 Garey and Johnson, Computers and Intractability, 185; cf. Hopcroft, J. and Ullman, J. Introduction to Automata Theory, Languages, and Computation (Reading, MA: Addison-Wesley 1979), 362

11 I have elsewhere developed a computational model of how to calculate the acceptability of beliefs in networks such as the one in Figure 2. Beliefs can be represented by nodes that are connected by excitatory and inhibitory links based on the explanatory relationships of the beliefs. A connectionist algorithm efficiently determines what beliefs are accepted and rejected. The theory of explanatory coherence and the connectionist program ECHO have been applied to many important cases in the history of science and in everyday life. See Thagard, P. ‘Explanatory Coherence,’ Behavioral and Brain Sciences 12 (1989) 435-67. See also Thagard, P. Conceptual Revolutions (Princeton: Princeton University Press 1992).
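The kind of settling procedure the note describes can be sketched in a few lines of Python. The function name, update rule, parameter values, and toy example below are illustrative assumptions about a generic connectionist constraint-satisfaction network, not a transcription of the published ECHO program:

    # Beliefs are nodes; explanatory relations supply positive (excitatory) links
    # and contradictions supply negative (inhibitory) links.  Repeated updating
    # lets the network settle: nodes ending with positive activation are accepted,
    # nodes ending with negative activation are rejected.

    def settle(links, evidence, decay=0.05, iterations=200):
        """links: dict mapping (i, j) node-name pairs to symmetric weights
        (positive = explanatory, negative = contradictory); each undirected
        link is listed once.  evidence: nodes clamped at activation 1.0."""
        nodes = {n for pair in links for n in pair}
        act = {n: (1.0 if n in evidence else 0.01) for n in nodes}
        for _ in range(iterations):
            new_act = {}
            for j in nodes:
                if j in evidence:                  # evidence units stay clamped
                    new_act[j] = 1.0
                    continue
                # net input to j: weighted sum of its neighbours' activations
                net = sum(w * act[i if k == j else k]
                          for (i, k), w in links.items() if j in (i, k))
                if net > 0:                        # excitation pushes toward +1
                    change = net * (1.0 - act[j])
                else:                              # inhibition pushes toward -1
                    change = net * (act[j] + 1.0)
                new_act[j] = max(-1.0, min(1.0, act[j] * (1 - decay) + change))
            act = new_act
        return act

    # Hypothetical toy case: H1 explains both pieces of evidence, H2 only one,
    # and the two hypotheses contradict each other.  H1 settles to a positive
    # activation (accepted); H2 settles to a negative one (rejected).
    result = settle({('H1', 'E1'): 0.1, ('H1', 'E2'): 0.1,
                     ('H2', 'E1'): 0.1, ('H1', 'H2'): -0.2},
                    evidence={'E1', 'E2'})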

12 Lewis, H. and Papadimitriou, C. Elements of the Theory of Computation (Englewood Cliffs, NJ: Prentice-Hall 1981), 329

13 Rogers, H. ‘The Present Theory of Turing Machine Computability,’ in Hintikka, J. ed., The Philosophy of Mathematics (Oxford: Oxford University Press 1969), 133f.

14 Cormen et al., Introduction to Algorithms, 917

15 Cherniak, C. Minimal Rationality (Cambridge, MA: MIT Press 1986), 92-3. This book contains a fine discussion of the epistemological relevance of complexity theory, particularly with respect to requiring a belief system to be consistent. It incorporates Cherniak, C. ‘Conceptual Complexity and the Universal Acceptance of Logic,’ Journal of Philosophy 81 (1984) 739-58.

16 Cormen et al., Introduction to Algorithms, 439

17 Kitcher, P. The Nature of Mathematical Knowledge (Oxford: Oxford University Press 1983), 221

18 Being in NP is not to be identified with intractability, however, since there are worse cases of intractability than the problems in NP: problems that require exponential time or worse even on nondeterministic machines.
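The standard containments among complexity classes, added here as a gloss (they are textbook material, not drawn from the note), make the point concrete:

    P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME.

By the time-hierarchy theorems P ≠ EXPTIME and NP ≠ NEXPTIME, so, for example, EXPTIME-complete problems are provably not solvable in deterministic polynomial time, whatever the answer to the P≠NP question turns out to be.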

19 For an insightful psychological discussion of conceptual coherence and how it goes beyond similarity of instances, see Murphy, G. and Medin, D. ‘The Role of Theories in Conceptual Coherence,’ Psychological Review 92 (1985) 289-316. Quine’s essay ‘Natural Kinds’ made a similar point; see Quine, W.V.O. Ontological Relativity and Other Essays (New York: Columbia University Press 1969).

20 On the structure of conceptual systems and how they change, see P. Thagard, Conceptual Revolutions.

21 P. Thagard, ‘Explanatory Coherence’