
Computational speed-up by effective operators¹

Published online by Cambridge University Press:  12 March 2014

Albert R. Meyer
Affiliation: University of California, Berkeley, California 94720
Patrick C. Fischer
Affiliation: University of California, Berkeley, California 94720

Extract

The complexity of a computable function can be measured by considering the time or space required to compute its values. Particular notions of time and space arising from variants of Turing machines have been investigated by R. W. Ritchie [14], Hartmanis and Stearns [8], and Arbib and Blum [1], among others. General properties of such complexity measures have been characterized axiomatically by Rabin [12], Blum [2], Young [16], [17], and McCreight and Meyer [10].

In this paper the speed-up and super-speed-up theorems of Blum [2] are generalized to speed-up by arbitrary total effective operators. The significance of such theorems is that one cannot equate the complexity of a computable function with the running time of its fastest program, for the simple reason that there are computable functions which in a very strong sense have no fastest programs.
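For orientation, Blum's speed-up theorem [2], the result generalized here, can be paraphrased as follows (this is a paraphrase in the notation of the extract, not a quotation from the paper):

```latex
% Blum's speed-up theorem [2], paraphrased:
% for every total recursive function r of two variables there is a
% total recursive f such that
\forall i\,\bigl[\varphi_i = f \implies
   \exists j\,\bigl(\varphi_j = f \,\wedge\,
      (\forall^{\infty} x)\; r\bigl(x, \Phi_j(x)\bigr) \le \Phi_i(x)\bigr)\bigr]
% where (\forall^{\infty} x) abbreviates "for all but finitely many x".
% The generalization in this paper replaces r by a total effective
% operator F, requiring F(\Phi_j)(x) \le \Phi_i(x) for almost all x.
```

Since every program i for f is beaten almost everywhere by some other program j for f, no program for f can be singled out as fastest.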

Let φi be the ith partial recursive function of one variable in a standard Gödel numbering of partial recursive functions. A family Φ0, Φ1, … of functions of one variable is called a Blum measure on computation provided that

(1) domain (φi) = domain (Φi), and

(2) the predicate [Φi(x) = m] is recursive in i, x, and m.

Typical interpretations of Φi(x) are the number of steps required by the ith Turing machine (in a standard enumeration of Turing machines) to converge on input x, the space or number of tape squares required by the ith Turing machine to converge on input x (with the convention that Φi(x) is undefined whenever the machine fails to halt, even if it loops within a finite number of tape squares), and the length of the shortest derivation of the value of φi(x) from the ith set of recursive equations.
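The step-counting interpretation can be made concrete with a small sketch (not from the paper; the program representation and helper names below are hypothetical). Programs are modeled as generator functions that yield once per "step"; Φi(x) is the number of yields before halting. Axiom (1) holds because the measure is defined exactly when the program halts, and axiom (2) is decidable by running the program for m + 1 steps:

```python
# A minimal sketch of a Blum measure where Phi_i(x) = number of steps.
# "Programs" are generator functions that yield once per step; these
# names and the representation are illustrative assumptions.

def run_with_budget(program, x, budget):
    """Run `program` on input `x` for at most `budget` steps.

    Returns (True, steps) if the program halts within the budget,
    otherwise (False, None).
    """
    gen = program(x)
    steps = 0
    try:
        while steps < budget:
            next(gen)       # one "step" of the computation
            steps += 1
    except StopIteration:   # the program halted
        return True, steps
    return False, None      # budget exhausted; may or may not halt later

def measure_equals(program, x, m):
    """Decide the predicate [Phi_i(x) = m], i.e. axiom (2).

    Running for m + 1 steps suffices: Phi_i(x) = m exactly when the
    program halts after m steps, neither earlier nor later.
    """
    halted, steps = run_with_budget(program, x, m + 1)
    return halted and steps == m

# Example program: doubles x, taking exactly x steps.
def double(x):
    acc = 0
    for _ in range(x):
        acc += 2
        yield
    return acc
```

The key point mirrored here is that axiom (2) only asks whether the measure *equals* m, which is decidable by bounded simulation, while the halting problem itself (is Φi(x) defined at all?) remains undecidable.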

Type: Research Article
Copyright: © Association for Symbolic Logic 1972


Footnotes

¹ Some of the results reported here were presented in preliminary form at the Ninth Annual Symposium on Switching and Automata Theory (1968), sponsored by the IEEE Computer Group. This research was supported in part by the following grants: ARPA (SD-146) at Carnegie-Mellon University; NSF GP-7701; NRC (Canada) A-5549; and Nonr-4102(01) at Project MAC, M.I.T.

References

[1] Arbib, M. A. and Blum, M., Machine dependence of degrees of difficulty, Proceedings of the American Mathematical Society, vol. 16 (1965), pp. 442–447.
[2] Blum, M., A machine-independent theory of the complexity of recursive functions, Journal of the Association for Computing Machinery, vol. 14 (1967), pp. 322–336.
[3] Blum, M., On the size of machines, Information and Control, vol. 11 (1967), pp. 257–265.
[4] Blum, M., On effective procedures for speeding up algorithms, Journal of the Association for Computing Machinery, vol. 18 (1971), pp. 290–305.
[5] Cobham, A., The intrinsic computational difficulty of functions, Proceedings of the 1964 International Congress on Logic, Methodology and Philosophy of Science, North-Holland, Amsterdam, 1964, pp. 24–30.
[6] Constable, R. L., The operator gap, Technical Report 69–35, Computer Science Department, Cornell University, 1969.
[7] Davis, M., Computability and unsolvability, McGraw-Hill, New York, 1958.
[8] Hartmanis, J. and Stearns, R. E., On the computational complexity of algorithms, Transactions of the American Mathematical Society, vol. 117 (1965), pp. 285–306.
[9] Helm, J. P. and Young, P. R., On size vs. efficiency for programs admitting speed-up, this Journal, vol. 36 (1971), pp. 21–27.
[10] McCreight, E. M. and Meyer, A. R., Classes of computable functions defined by bounds on computation: preliminary report, Conference Record of the ACM Symposium on Theory of Computing, Association for Computing Machinery, New York, 1969, pp. 79–88.
[11] Meyer, A. R. and Fischer, P. C., On computational speed-up, Conference Record of 1968 Ninth Annual Symposium on Switching and Automata Theory, Institute of Electrical and Electronics Engineers, Inc., New York, pp. 351–355.
[12] Rabin, M. O., Degree of difficulty of computing a function and a partial ordering of recursive sets, Technical Report 2, Department of Mathematics, Hebrew University, Jerusalem, 1960.
[13] Ritchie, D. M., Program structure and computational complexity, Thesis, Division of Engineering and Applied Physics, Harvard University, 1968.
[14] Ritchie, R. W., Classes of predictably computable functions, Transactions of the American Mathematical Society, vol. 106 (1963), pp. 139–173.
[15] Rogers, H. Jr., Theory of recursive functions and effective computability, McGraw-Hill, New York, 1967.
[16] Young, P. R., Toward a theory of enumerations, Journal of the Association for Computing Machinery, vol. 16 (1969), pp. 328–348.
[17] Young, P. R., A note on "axioms" for computational complexity and computation of finite functions, Technical Report CSD TR 39, Computer Sciences Department, Purdue University, 1969.