Designers who are experts in a given design domain are well known to be able to immediately focus on "good designs," suggesting that, while exploring the design space, they may have learned additional constraints based on functional aspects of the designs. These constraints, which are often implicit, result in a redefinition of the design space and may be crucial for discovering chunks or interrelations among the design variables. Here we propose a machine-learning approach for discovering such constraints in supervised design tasks. For designs with a given structure or embodiment, we develop models that specify design function in terms of a set of performance metrics for evaluating a given design. The functionally feasible regions, which are those parts of the design space that demonstrate high levels of performance, can then be learned using any general-purpose function approximator. We demonstrate this process with examples from the design of simple locking mechanisms and show that, as in human experience, the quality of the learned constraints improves with greater exposure to the design space. Next, we consider changing the embodiment and suggest that similar embodiments may have similar abstractions. To explore convergence, we also investigate the variability in time and error rates when the experiential patterns differ significantly. In the process, we also consider the situation where certain functionally feasible regions may encode lower-dimensional manifolds and how this may relate to cognitive chunking.
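The learning step summarized above can be illustrated with a minimal sketch, which is not the authors' implementation: it assumes a toy two-variable design space and a hypothetical performance metric, labels sampled designs as functionally feasible when their performance exceeds a threshold, and fits a general-purpose function approximator (here an MLP classifier) to the labels. The metric, threshold, sample sizes, and network settings are all illustrative assumptions.

```python
# Minimal sketch: learning a "functionally feasible region" of a design space
# from performance labels, using a general-purpose function approximator.
# The performance metric below is a hypothetical stand-in, not the paper's.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def performance(x):
    """Illustrative performance metric over two design variables
    (e.g., a stand-in for a locking-strength score)."""
    return np.exp(-((x[:, 0] - 0.6) ** 2 + (x[:, 1] - 0.4) ** 2) / 0.05)

# Sample candidate designs uniformly from a 2-D design space.
X = rng.uniform(0.0, 1.0, size=(2000, 2))

# Designs whose performance exceeds a threshold count as functionally feasible.
y = (performance(X) > 0.5).astype(int)

# Any general-purpose function approximator can learn the implicit constraint;
# an MLP classifier is used here purely as an example.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X, y)

# The learned model now acts as an implicit constraint: it predicts whether
# an unseen design lies in the high-performance region of the design space.
candidates = rng.uniform(0.0, 1.0, size=(5, 2))
print(clf.predict(candidates))
```

Under this reading, "greater exposure to the design space" corresponds to training on more sampled designs, which would tighten the learned boundary of the feasible region.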