Published online by Cambridge University Press: 28 February 2022
The performance of a connectionist learning system on a simple problem has been described by Hinton and is briefly reviewed here: a finite function is learned, and the system generalizes correctly from partial information by finding simple “features” of the environment. For comparison, a very similar problem is formulated in the Gold paradigm of discrete learning functions. Identification in the limit from positive text of a large class of functions, including Hinton's, is achievable with a trivial, conservative learning strategy. Using Valiant's approach, we place an arbitrary finite bound on function complexity and can then guarantee text and resource efficiency relative to a probabilistic criterion of success. But the connectionist system generalizes; that is, it uses a non-conservative learning strategy. We define a simple, non-conservative strategy that also generalizes like the connectionist system, finding simple “features” of the environment.
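The contrast between the two strategies can be sketched informally. The sketch below is an illustrative assumption, not the paper's formal definitions: a conservative learner memorizes observed input–output pairs and never revises its hypothesis except when forced by the data, while a non-conservative learner generalizes by searching for a single predictive "feature" (here, a bit position) consistent with all examples seen so far.

```python
def conservative_learner(examples):
    """Conservative strategy: memorize observed (input, output) pairs
    and answer a fixed default (0) on unseen inputs. The hypothesis
    changes only when new data contradicts it."""
    table = dict(examples)
    return lambda x: table.get(x, 0)

def feature_learner(examples):
    """Non-conservative strategy (illustrative): look for one bit
    position whose value predicts the output on every example seen,
    and generalize that 'feature' to unseen inputs."""
    n = len(examples[0][0]) if examples else 0
    for i in range(n):
        if all(x[i] == y for x, y in examples):
            return lambda x, i=i: x[i]   # generalizes beyond the data
    # No single predictive feature found: fall back to memorization.
    table = dict(examples)
    return lambda x: table.get(x, 0)

# Hypothetical target function: output = second bit of the input.
data = [((0, 1, 0), 1), ((1, 1, 1), 1), ((1, 0, 0), 0)]
c = conservative_learner(data)
f = feature_learner(data)

unseen = (0, 1, 1)
print(c(unseen))  # conservative learner answers the default: 0
print(f(unseen))  # feature learner generalizes correctly: 1
```

Both learners agree on the observed examples; they differ only on unseen inputs, which is exactly where a non-conservative strategy can generalize correctly (or incorrectly) while a conservative one stays silent in effect.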
I am grateful to Geoffrey Hinton, Edward Stabler, William Demopoulos, Daniel Osherson, and Zenon Pylyshyn for helpful discussions of this material. This research was supported in part by the Canadian Institute for Advanced Research and the Natural Sciences and Engineering Research Council of Canada.