
Parity still isn't a generalisation problem

Published online by Cambridge University Press:  01 April 1998

R. I. Damper
Affiliation:
Cognitive Sciences Centre and Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, [email protected] http://isis.ecs.soton.ac.uk/

Abstract

Clark & Thornton take issue with my claim that parity is not a generalisation problem, and that nothing can be inferred about back-propagation in particular, or learning in general, from failures of parity generalisation. They advance arguments to support their contention that generalisation is a relevant issue. In this continuing commentary, I examine generalisation more closely in order to refute these arguments. Different learning algorithms will have different patterns of failure: back-propagation has no special status in this respect. This is not to deny that a particular algorithm might fortuitously happen to produce the “intended” function in an (oxymoronic) parity-generalisation task.
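To see why parity resists generalisation, consider a minimal sketch (my own illustration, not from the commentary): flipping any single bit of a pattern flips its parity, so every nearest neighbour of a held-out pattern carries the *opposite* label. Any similarity-based learner, back-propagation included, is therefore systematically misled; whatever it outputs on held-out patterns reflects its inductive bias, not the parity function.

```python
from itertools import product

def parity(bits):
    """Parity label: 1 if an odd number of bits are set, else 0."""
    return sum(bits) % 2

def hamming(a, b):
    """Number of bit positions in which patterns a and b differ."""
    return sum(x != y for x, y in zip(a, b))

n = 4
patterns = list(product([0, 1], repeat=n))  # all 2**n binary patterns

# Hold out one pattern and train on the rest (a typical
# parity-"generalisation" setup).
held_out = (1, 0, 1, 0)
train = [p for p in patterns if p != held_out]

# Every training pattern at Hamming distance 1 from the held-out
# pattern has the opposite parity, so a nearest-neighbour rule
# (or any smoothness-biased learner) predicts the wrong label.
neighbours = [p for p in train if hamming(p, held_out) == 1]
assert all(parity(p) != parity(held_out) for p in neighbours)
```

The same property holds for every held-out pattern, which is why any particular learner's success on such a task is fortuitous rather than evidence of having recovered the intended function.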

Type
Continuing Commentary
Copyright
© 1998 Cambridge University Press
