
Coherence and correspondence in engineering design: informing the conversation and connecting with judgment and decision-making research

Published online by Cambridge University Press:  01 January 2023

Konstantinos V. Katsikopoulos*
Affiliation:
Max Planck Institute for Human Development & Massachusetts Institute of Technology
*
* Address: Konstantinos V. Katsikopoulos, Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany. Email: [email protected].

Abstract

I show how the coherence/correspondence distinction can inform the conversation about decision methods for engineering design. Some engineers argue for the application of multi-attribute utility theory while others argue for what they call heuristics. To clarify the differences among methods, I first ask whether each method aims at achieving coherence or correspondence. By analyzing statements in the design literature, I argue that utility theory aims at achieving coherence and heuristics aim at achieving correspondence. Second, I ask if achieving coherence always implies achieving correspondence. It is important to provide an answer because, while in design the objective is correspondence, correspondence is difficult to assess, and coherence, which is easier to assess, is used as a surrogate. I argue that coherence does not always imply correspondence in design and that this is also the case in problems studied in judgment and decision-making research. Uncovering the conditions under which coherence implies, or does not imply, correspondence is a topic where engineering design and judgment and decision-making research might connect.

Type
Research Article
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2009] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Kenneth Hammond (1996, 2007) has pointed out that a method can be evaluated both according to its internal consistency, or coherence, and its external performance, or correspondence. It is important to keep this distinction in mind when comparing decision methods because one method could be achieving coherence while another method could be achieving correspondence. For example, the take-the-best heuristic (Gigerenzer & Goldstein, 1996) violates a criterion of coherence (transitivity) that is satisfied by linear regression, while, under some conditions, take-the-best outperforms regression on a criterion of correspondence (predictive accuracy). In this article, I show how the coherence/correspondence distinction can inform the conversation about decision methods within a field that has had minimal overlap with judgment and decision making (JDM) research: engineering design.

In 1999, George Hazelrigg, the National Science Foundation (NSF) engineering design program director, wrote: “It is increasingly recognized that engineering design [is a] decision-intensive process” (p. 342). The NSF has, since 1996, sponsored numerous workshops on decision-based design. The Accreditation Board for Engineering and Technology also defines engineering design as a decision-making process.

What decisions do design engineers make? Design engineers choose among alternative concepts. A design concept is a technical specification of an artifact that is detailed enough so that the engineer can predict, reasonably accurately, how the artifact will function. For example, a concept of a chair would specify the material used to build each chair part and the geometrical relationships among the parts. There are typically many attributes on which design concepts can be evaluated. Examples of attributes for a chair concept are durability, comfort, or production cost. Hereafter, I refer to design concepts as simply designs.

Some engineers (Thurston, 1991, 2001) argue for the application of multi-attribute utility theory for choosing among designs, while others argue for what they call heuristics, such as Stuart Pugh’s convergence process (Pugh, 1981, 1990). This debate has by and large ignored the coherence/correspondence distinction. I ask two questions that use the distinction and inform the debate.

The first question is whether each method aims at achieving coherence or correspondence (or both). By analyzing published statements in the design literature, I argue that multi-attribute utility theory aims at achieving coherence while the Pugh convergence process aims at achieving correspondence.

The second question is if achieving coherence always implies achieving correspondence. It is important to answer this question because, while in design the objective of decision-making is correspondence, correspondence is difficult to assess, and coherence, which is easier to assess, is used as a surrogate. For the surrogate (coherence) to be useful for inferring the objective (correspondence), the relationship between the two must be known. I argue that coherence does not always imply correspondence in design, and that this is also the case in decision problems studied in JDM. I conclude that the study of conditions under which coherence implies, or does not imply, correspondence is a topic where design and JDM research might connect.

Before asking and answering the two questions, I review two methods for making decisions that have a prominent place in engineering design theory and practice.

2 Decision methods in engineering design: Multi-attribute utility theory and the Pugh convergence process

Deborah Thurston (1991) proposed that Keeney and Raiffa’s (1976/1993) multi-attribute utility theory be applied to design. This theory satisfies the following three principles (for more details, see Katsikopoulos & Gigerenzer, 2008).

  (1) Worth of Designs. For the decision-maker, each design has a worth associated with it, measured by a numerical value (i.e., utility).

  (2) Absolute Evaluation of Designs. The worth of a design to the decision-maker is determined in an absolute way (i.e., without considering the other designs), by using a function that maps the attributes of the design to its utility.

  (3) Assessment of Utilities. The utility function is assessed by questioning the decision-maker about her or his preference structure over the design space (i.e., the utility function is not stated explicitly but revealed through the answers).

Thurston writes: “Utility analysis cannot be the only analytic tool employed in design” (Thurston, 2001, p. 182). Other decision methods used in design can be described in terms of whether or not they satisfy the three principles of multi-attribute utility theory.

In the rating/weighting method (Scott & Antonsson, 1999), the worth of a design is calculated by summing the design’s attribute levels, each multiplied by the weight of the corresponding attribute. This method is the analogue of the weighted linear model in JDM research. The rating/weighting method conforms to (1) and (2). But it violates (3) because the weights of the attributes must be stated explicitly by the design engineers. Saaty’s (1980) Analytic Hierarchy Process (AHP) conforms to (1) but dispenses with (2) and (3): AHP calculates a measure of worth for each design, but this measure takes into account the other designs, using the explicit statements of the engineer.
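As a sketch, the rating/weighting calculation is simply a weighted sum. The attribute names, levels, and weights below are hypothetical, with levels assumed pre-scaled so that higher is better:

```python
# Sketch of the rating/weighting method (a weighted linear model).
# All attribute names, levels, and weights are hypothetical.

def worth(design, weights):
    """Worth of a design: sum of attribute levels times attribute weights."""
    return sum(weights[attr] * level for attr, level in design.items())

weights = {"durability": 0.5, "comfort": 0.3, "cost": 0.2}

# Attribute levels are assumed pre-scaled so that higher is better.
chair_a = {"durability": 0.9, "comfort": 0.4, "cost": 0.7}
chair_b = {"durability": 0.6, "comfort": 0.8, "cost": 0.5}

# Principle (1): each design gets a numerical worth; principle (2): the
# worth of a design does not depend on the other designs.
best = max([chair_a, chair_b], key=lambda d: worth(d, weights))
```

Principle (3) is violated because the `weights` dictionary is stated explicitly rather than revealed through preference questions.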

There is a method that dispenses with all three principles of multi-attribute utility theory. In Pugh’s (1981, 1990) convergence process, a group of experts is asked to compare each design with a benchmark. For each attribute, they judge whether the design is as good as (0), better than (+1), or worse than (–1) the benchmark. For example, the durability of a new chair may be compared to the durability of a benchmark swivel chair. Even though it can sometimes be determined experimentally which one of two chairs is more durable, the judgments are the opinions of experts. These judgments are not weighted or summed up. Pugh (1990, p. 77, emphasis in the original) writes: “The scores or numbers…must not be summed algebraically.” That is, there is no design worth, violating (1), (2), and (3).

The numbers that appear in the Pugh process are there to probe. The idea is that if a design is worse on a particular attribute, this is a stimulus to think whether it can be improved, and to generate new designs. Designers are encouraged to change existing designs and create new ones. On each iteration, all designs, including the ones created throughout the process, are compared to the benchmark. Through conversation among designers, the benchmark of the next iteration emerges. When it becomes clear that a design is best, the process is terminated.
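A single iteration of this comparison can be sketched as follows. The designs, attributes, and expert judgments are hypothetical, and, following Pugh, the scores are tallied but never summed algebraically:

```python
# Sketch of one iteration of a Pugh matrix. Designs, attributes, and
# expert judgments are hypothetical; +1 / 0 / -1 mean better than, as
# good as, or worse than the benchmark on that attribute.

from collections import Counter

attributes = ["durability", "comfort", "cost"]

# judgments[design][attribute]: the experts' verdict versus the benchmark
judgments = {
    "design_A": {"durability": +1, "comfort": 0, "cost": -1},
    "design_B": {"durability": -1, "comfort": +1, "cost": +1},
}

def tally(scores):
    """Count how often a design is better (+1), the same (0), or worse (-1)."""
    return Counter(scores.values())

# A '-1' is a prompt to improve that attribute (and perhaps to generate
# a new design), not a number to be traded off against the '+1's.
weaknesses = {
    design: [a for a in attributes if scores[a] == -1]
    for design, scores in judgments.items()
}
```

Note that there is no `worth()` function here: the tallies and flagged weaknesses feed the conversation and the next iteration, not a ranking formula.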

In the JDM literature, there are many opinions on what is and is not a heuristic, and on what it means to fully specify a heuristic. It seems that, in the design literature, the Pugh process is considered to be a heuristic. For example, Franssen (2005, p. 55) writes: “… Pugh … method … is only a ‘heuristic tool.’ ” Frey et al. (2009) explicitly label the Pugh process as a heuristic.

Connections can be drawn between the decision methods that designers use and fast and frugal heuristics (Magee & Frey, 2006).[1] While it may be a stretch to call it a fast and frugal heuristic, the Pugh process shares some practices with methods such as take-the-best (Gigerenzer & Goldstein, 1996). For example, both use pairwise comparisons and ignore some pieces of information (e.g., attribute weights). Interestingly, these practices are often avoided in design decision-making (Saari & Sieberg, 2004).

I reviewed two methods for making decisions in design: multi-attribute utility theory and the Pugh convergence process. In sum, in utility theory the decision-maker deliberates and may secure accountability, while in the Pugh process the decision-makers rely on intuition and aim at boosting creativity. In the next section, I sample and analyze statements about the two methods in the design literature. The goal is to examine whether utility theory and the Pugh process aim at achieving coherence, correspondence, or both.

3 Multi-attribute utility theory aims at achieving coherence; the Pugh process aims at achieving correspondence

I first define what it means to achieve coherence and correspondence in design decision-making.

3.1 Achieving coherence in design decision-making

Hammond (2007, p. xvi) defines coherence as “the consistency of the elements of the person’s judgment.” I use the same definition for coherence in engineering design.

For example, suppose that an engineer chooses chair design A. To evaluate the coherence of this decision, it needs to be checked whether the statements the engineer made in order to decide for A were internally consistent. If the engineer maintained that design A would be chosen over design B if C were considered as another possible design, but B would be chosen over A if C were not considered, she has violated a coherence requirement called the independence of irrelevant alternatives. Another failure of coherence is intransitivity, where A is chosen over B, B is chosen over C, and C is chosen over A. Some coherence requirements are related to the three principles of multi-attribute utility theory. The worth-of-designs principle implies transitivity, and the absolute-evaluation-of-designs principle implies independence of irrelevant alternatives.
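Such coherence requirements can be checked mechanically from an engineer’s stated pairwise choices. Below is a minimal sketch of a transitivity check; the preference cycle used as input is a hypothetical example:

```python
# Minimal sketch of a transitivity check over stated pairwise choices.
# choice(x, y) returns the design the engineer picks from the pair;
# the example preferences below are hypothetical.

from itertools import permutations

def is_transitive(designs, choice):
    """True iff no triple has A chosen over B, B over C, but C over A."""
    for a, b, c in permutations(designs, 3):
        if choice(a, b) == a and choice(b, c) == b and choice(a, c) != a:
            return False
    return True

# An intransitive cycle: A over B, B over C, and C over A.
cycle = {("A", "B"): "A", ("B", "C"): "B", ("A", "C"): "C"}
choice = lambda x, y: cycle.get((x, y)) or cycle.get((y, x))

assert not is_transitive(["A", "B", "C"], choice)
```

An analogous check for independence of irrelevant alternatives would compare the choice between A and B with and without C in the choice set.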

3.2 Achieving correspondence in design decision-making

I use Hammond’s definition of correspondence. He writes: “There are two general ways we evaluate another person’s judgments [coherence and correspondence]. One is to ask if they are empirically correct: When someone judges this tree to be ten feet tall, will a yardstick prove that she is right?” (Hammond, 2007, p. xvi).

A difference between coherence and correspondence is that in coherence the criterion is internal (logical consistency) while in correspondence the criterion is external (success in the real world). While criteria of logic are essentially domain-independent, criteria of correspondence depend on the decision problem.

In engineering, correspondence is typically achieved by a design that “works.”[2] The user determines what requirements the design must satisfy so that it can be said to work. In engineering jargon, these are called functional requirements. After the user articulates how she wants the artifact to function, the engineer expresses the functional requirements in technical terms, often as mathematical constraints involving attributes.

For example, consider a user who says that she wants an office chair that “will last for some time.” A technical description of the functional requirement is that “the time to failure exceeds 10,000 hours of use by a female with physical characteristics that are within the middle 99% of the normal adult range.”
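Expressed as a constraint, such a functional requirement becomes a binary test that a candidate design passes or fails. The predicted time-to-failure values below are hypothetical:

```python
# The durability requirement above, written as a mathematical constraint
# that a candidate design either satisfies or violates. The predicted
# time-to-failure values are hypothetical.

REQUIRED_HOURS = 10_000  # "will last for some time", made technical

def satisfies_requirement(predicted_time_to_failure_hours):
    """On this requirement, a design 'works' iff the constraint holds."""
    return predicted_time_to_failure_hours >= REQUIRED_HOURS

assert satisfies_requirement(12_500)     # this design works
assert not satisfies_requirement(8_000)  # this one does not
```

Note that the correspondence criterion is external: the test is about the artifact’s predicted behavior in use, not about the consistency of the engineer’s statements.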

3.3 The Pugh process aims at correspondence

For more than fifty years, economists have been discussing decision theories in terms of axioms (Allais, 1952/1979; Starmer, 2000). Decision theories can be descriptive (what does a person do?), normative (what should an ideal person do?), and prescriptive (what should a real person do?), and it has been argued that the common ground of all three is a set of axioms (Luce & von Winterfeldt, 1994).

Engineers have also discussed decision methods in terms of axioms. For example, Thurston (2006, p. 19) labels axioms as “rules for clear thinking.” A common argument against heuristic methods is that, under some conditions, they violate some axioms.

Franssen (2005, p. 55) presents an existence proof for the claim that, depending on the initial benchmark, the design chosen by the Pugh process may vary. This is a violation of independence of irrelevant alternatives. Franssen labels such violations as “difficulties.”

Hazelrigg (1996) has criticized the group decision-making aspects of the Pugh process because they can lead to violations of independence of irrelevant alternatives as well. To argue this, Hazelrigg uses Arrow’s (1950) impossibility theorem. Informally, the theorem says that there is no method that translates the individual preference orders of two or more designers over three or more designs into a “group” preference order so that five axioms are always jointly satisfied. One of these axioms is the independence of irrelevant alternatives.

Hazelrigg (1996, p. 161) uses Arrow’s theorem to conclude that “the methods of … Quality Function Deployment (QFD)[3] can lead to highly erroneous results.” Franssen (2005, p. 55) writes: “[the Pugh process] does not meet Arrow’s requirement” and, “Presumably, because he is well aware of difficulties like these, Pugh issues a warning that his method … is only a ‘heuristic tool’.”[4]

These authors are correct in pointing out that the Pugh process does not aim at achieving coherence. Nowhere in the writings of Pugh is there a concern with adhering to the axioms of decision theories.[5] On the other hand, aiming at correspondence, Pugh (1981, emphasis added) introduced the convergence process, in the very title of his paper, as “a method that works.” Similarly, Frey et al. (2009) and Clausing and Katsikopoulos (2008) argued in favor of the Pugh process by saying that it can lead to success in real design problems.

Failing to acknowledge that the Pugh process aims at achieving correspondence but not coherence has led to conversations where design researchers talk past each other. Consider the unrestricted-domain axiom (which features in Arrow’s theorem): “Each member of the design group is free to rank designs in any way.” Scott and Antonsson (1999) argue that it is not obvious that this axiom should be considered true in design. For example, they say that when the bending stiffness of three designs, A, B, and C, has the values of 3,000, 3,200, and 3,400 N/mm, respectively, it cannot be that a designer is free to rank them as C > A > B (Scott & Antonsson, 1999, pp. 223–224). That is, they say that engineering reality constrains engineering judgment. This is a correspondence argument. Franssen (2005, pp. 48–49) replies with a textbook case of a coherence argument: “… it is of paramount importance to realize that preference is a mental concept and is neither logically nor causally determined by the physical characteristics of a design option.”

3.4 Utility theory aims at coherence

Utility theory is a mathematical theory that reigns in economics (Starmer, 2000), operations research (Keeney & Raiffa, 1976/1993), and decision analysis (Howard, 1968). Mathematical theories proceed from axioms to theorems. As such, utility theory aims at achieving coherence.

Does utility theory aim at achieving correspondence as well? There is a sense in which the answer seems to be yes. Keeney and Raiffa (1976/1993) advocate its use because they believe that utility theory will lead to success in real-world decision problems. Why would this be true? The answer implicit in the design literature — at least among proponents of multi-attribute utility theory — is that achieving coherence always implies achieving correspondence. For example, Thurston (2001, p. 176) writes: “Unaided human decision-making often exhibits inconsistencies, irrationality, and suboptimal choices …. To remedy these problems, decision theory is built on a set of ‘axioms of rational behavior.’ ” Here, lack of coherence (inconsistencies) is mentioned in the same sentence as lack of correspondence (suboptimal choices), as if to express that the two are conceptually very close to each other. In fact, the second sentence directly suggests that lack of correspondence will be remedied by coherence.

My opinion is that, even if the proponents of multi-attribute utility theory believe that it aims at achieving correspondence, the only thing we know for sure is that utility theory aims at achieving coherence. Given the premise that coherence always implies correspondence, we would conclude that utility theory aims at achieving correspondence, but the truth of the premise is an open empirical question.

In the next and final section of the paper, I discuss what we know, from both engineering design and JDM research, about whether achieving coherence always implies achieving correspondence.

4 Does achieving coherence always imply achieving correspondence? Two counterexamples from engineering design and JDM research

I first argue that, in design, it is important to know if coherence implies correspondence. The reason is that, while in design the objective of decision-making is correspondence, correspondence is difficult to assess, and coherence, which is easier to assess, is used as a surrogate. For the surrogate (coherence) to be useful for inferring the objective (correspondence), the relationship between the two must be known.

The previous argument rests on two claims: First, correspondence is the objective of design decision-making. Second, correspondence is difficult to assess in design decision-making. I argue for these two claims.

First, it is in a sense obvious that the only thing that ultimately matters in engineering is “how well the design works.” It is not acceptable to argue coherently but choose a design that is not functional. The objective is to achieve correspondence.[6]

To argue for the second claim, that correspondence is difficult to assess in design, I start with a frequent observation in the JDM literature. While coherence can be assessed during the process of making a judgment or a decision, correspondence can be assessed only after the outcome of the judgment or decision has been observed (Connolly, Arkes, & Hammond, 2000). With respect to coherence, this observation seems to hold in engineering design as well. The situation is somewhat different with respect to correspondence.

Whether a design satisfies a functional requirement may be assessed both after and before a design is chosen. Crucially, assessing whether a functional requirement is satisfied is often difficult or even impossible. Extensive experimentation is needed in order to establish that a requirement, such as durability, is satisfied (Frey & Dym, 2006). It is even harder to measure criteria of correspondence such as transparency or critical acclaim. Some correspondence criteria, such as market share, may be easier to measure, but it is hard to attribute success or failure to decision-making alone. For example, marketing may greatly influence the sales of an artifact.

What is the precise meaning of the statement “achieving coherence implies achieving correspondence”? There are many possibilities. I propose the following:

Achieving Coherence Implies Achieving Correspondence. For a decision problem, achieving coherence implies achieving correspondence if, for any two methods A and B such that A satisfies a criterion of coherence and B violates this criterion, A scores higher on all criteria of correspondence than B.

I argue that there exist counterexamples to the claim that achieving coherence implies achieving correspondence for all decision problems. Don Clausing and I made this point for engineering design.

After the golden post-World-War-II era, the American manufacturing industry began to lose ground in the 1970s, in particular compared to Japan. By the 1980s, the crisis was so obvious that investigations were undertaken. The report of the MIT Commission on Industrial Productivity (1989) summarizes some results. Based on the report, Clausing and Katsikopoulos (2008) argue that methods such as the Pugh process lead to higher-quality designs that are produced at less cost and are delivered more quickly than the designs chosen by methods such as rating/weighting and utility theory. Recall that the Pugh process violates coherence criteria such as the independence of irrelevant alternatives, while the rating/weighting method and multi-attribute utility theory satisfy this criterion.

An explanation for this counterexample is that the Pugh process fosters creativity while other methods stifle it. Classical utility-based decision analysis (von Winterfeldt & Edwards, 1986; Edwards & Fasolo, 2001) may stifle creativity when it focuses on the analysis of the designs provided before the decision process starts, and neglects the generation of novel designs. This explanation is consistent with the results of a study by Frey et al. (2009), in which a computer simulation was used to assess the profitability achieved by different decision methods. It was found that the Pugh process outperformed the rating/weighting method when creativity was modeled as part of the design decision process, while the two methods were equally profitable when there was no creativity.

It is noteworthy that practitioners of classical decision analysis have drawn attention to creativity as well (Phillips, 1982). In a recent paper, Ralph Keeney (2004a, p. 193) reflects on this: “[In decision analysis] more emphasis must be placed on structuring decisions worth thinking about, and less emphasis must be based on analyzing structured decisions.” He has also written specifically on how to create design alternatives (Keeney, 2004b).

The finding that coherence does not always imply correspondence should not be a surprise to JDM researchers. Some of the results of the “fast-and-frugal-heuristics” program can be interpreted as showing this as well.

Consider the paired comparison problem where it has to be judged which one of two objects has the higher value on a numerical criterion. This judgment is made based on the values of the objects on cues (that correlate, albeit imperfectly, with the criterion). For example, the two objects may be companies, the criterion may be a company’s net worth, and a cue may be whether a company is in the stock exchange.

Take one method to be multiple linear regression. Take another method to be the lexicographic heuristic take-the-best (Gigerenzer & Goldstein, 1996), where cues are looked up one at a time and a decision is made based on the first cue with different values on the two objects. It is easy to see that regression satisfies transitivity and take-the-best does not. Across twenty datasets (from economics, biology, psychology; see Gigerenzer et al., 1999), linear regression achieved a predictive accuracy of 68% and take-the-best achieved 71%.[7]
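The comparison logic of take-the-best can be sketched in a few lines. The cue order, companies, and cue values below are hypothetical:

```python
# Sketch of the take-the-best heuristic for a paired comparison: look
# up cues in order of validity and decide on the first cue that
# discriminates. Objects, cue names, and cue values are hypothetical.

def take_the_best(obj_a, obj_b, cues_by_validity):
    """Return the object inferred to have the higher criterion value,
    or None if no cue discriminates (one would then guess)."""
    for cue in cues_by_validity:
        a_val, b_val = obj_a[cue], obj_b[cue]
        if a_val != b_val:  # first discriminating cue decides; rest ignored
            return obj_a if a_val > b_val else obj_b
    return None

cues = ["on_stock_exchange", "has_foreign_offices"]  # ordered by validity
company_x = {"name": "X", "on_stock_exchange": 1, "has_foreign_offices": 0}
company_y = {"name": "Y", "on_stock_exchange": 1, "has_foreign_offices": 1}

winner = take_the_best(company_x, company_y, cues)
```

Because each pair may be decided by a different cue, choices across triples of objects need not be transitive, which is exactly the coherence violation discussed above.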

As in design, this counterexample contradicts the rhetoric of classical decision analysis. Keeney and Raiffa (1993, p. 78) have written that lexicographic heuristics are “naively simple” and “will rarely pass a test of reasonableness.” As Gigerenzer and Goldstein (1996, p. 663) point out, “despite empirical evidence…lexicographic algorithms have often been dismissed at face value because they violate the tenets of classical rationality.” Lexicographic heuristics were dismissed because it was assumed that coherence always implied correspondence.[8]

The question of the conditions under which heuristics should be used for making decisions has been addressed in the JDM literature. There is agreement that “…in general, …heuristics are quite useful, but sometimes they lead to severe and systematic errors” (Tversky & Kahneman, 1974, p. 1124). But what does “sometimes” mean? In the beginning, answers were typically cast as a list of experimental manipulations that increase or decrease the accuracy of a heuristic. Further progress has been made recently by specifying precise models of heuristics and using them to analyze their performance.

There now exist mathematical analyses of the accuracy of lexicographic heuristics such as take-the-best, and of more sophisticated methods (Martignon & Hoffrage, 2002; Hogarth & Karelaia, 2007; Baucells et al., 2008). For example, a necessary and sufficient condition has been derived under which a lexicographic heuristic achieves maximum accuracy (Katsikopoulos & Martignon, 2006).

Uncovering general conditions under which achieving coherence implies achieving correspondence is a topic where engineering design and JDM research might connect. Interestingly, these conditions may involve mathematical properties as well as psychological constructs.

Footnotes

*

This work was supported by a German Science Foundation (DFG) Fellowship for Young Researchers KA 2286/4–1. I thank Jon Baron, Don Clausing, Phil Dunwoody, Jonathan Evans, Dan Frey, Robin Hogarth, Chris Magee, two anonymous reviewers, and the participants of the 2007 Brunswik Society Meeting for their comments.

1 Magee and Frey (2006) discuss a design exercise where undergraduate students had to develop a paper airplane that would fly a given distance consistently. It was observed that students seemed to use “one-reason” decision heuristics when testing and creating airplane designs (even though, as the authors acknowledge, this research did not employ controlled studies). The authors review work on fast and frugal heuristics, compare it to the thinking and reasoning that goes on in design, and conclude that “Our current belief is that engineering designers use a toolbox of fast and frugal heuristics” (Magee & Frey, 2006, p. 486).

2 Correspondence can also be measured by human-factors criteria such as transparency or usability, and by “broader” criteria such as critical acclaim or success in the market. For more examples of coherence and correspondence criteria in engineering design, and for some comments on their relation, see Evans, Foster, and Katsikopoulos (in press). Computer science and engineering also provide examples of coherence and correspondence criteria. For example, computer code has to satisfy syntax requirements (coherence) and produce the “desirable” output (correspondence).

3 One of these methods is the Pugh process; see Hauser and Clausing (1988) for details.

4 It is worth noting that even though the difficulties implied by Arrow’s theorem are theoretically possible, they are rarely realized in practice (Regenwetter et al., 2006).

5 This does not mean that the Pugh process aims at, or would even accept, incoherence in the sense of not adhering to basic mathematical truths such as “1 + 1 = 2.”

6 Note that my argument is that correspondence should have priority specifically in engineering design. I would not make this argument for all fields. For example, coherence should be the top priority in mathematics. In public policy, it has been argued that a mix of coherence and correspondence may be most appropriate (Hammond, 2007).

7 Lages, Hoffrage, and Gigerenzer (2000) provide some evidence that decision methods that produce a higher number of intransitive triples also have higher predictive accuracy if there is not a lot of missing information. Note, however, that it is not clear that the number of intransitive triples is related monotonically to the “degree” of coherence (Regenwetter et al., 2006).

8 For comments on the role that lexicographic heuristics can play in decision analysis, see Katsikopoulos and Fasolo (2006).

References

Allais, M. (1979). Foundations of a positive theory of choice involving risk, and a criticism of the postulates and axioms of the American School. In M. Allais & O. Hagen (Eds.), Expected utility hypothesis and the Allais’ Paradox (pp. 25–145). Dordrecht: D. Reidel. (Original work published 1952)
Arrow, K. J. (1950). A difficulty in the concept of social welfare. Journal of Political Economy, 58, 328–346.
Baucells, M., Carrasco, J. A., & Hogarth, R. M. (2008). Cumulative dominance and heuristic performance in binary multi-attribute choice. Operations Research, 56, 1289–1304.
Clausing, D. P., & Katsikopoulos, K. V. (2008). Rationality in systems engineering: Beyond calculation or political action. Systems Engineering, 11, 309–328.
Connolly, T., Arkes, H. R., & Hammond, K. R. (2000). Judgment and decision making: An interdisciplinary reader. Cambridge, UK: Cambridge University Press.
Edwards, W., & Fasolo, B. (2001). Decision technology. Annual Review of Psychology, 52, 581–606.
Evans, J. R., Foster, C., & Katsikopoulos, K. V. (in press). Coherence and correspondence in engineering design evaluations. Proceedings of the Annual Meeting of the American Society for Engineering Education.
Franssen, M. (2005). Arrow’s theorem, multi-criteria decision problems and multi-attribute preferences in engineering design. Research in Engineering Design, 16, 42–56.
Frey, D. D., & Dym, C. L. (2006). Validation of design methods: Lessons from medicine. Research in Engineering Design, 17, 45–57.
Frey, D. D., Herder, P. M., Wijnia, Y., Subramanian, E., Katsikopoulos, K. V., & Clausing, D. P. (2009). An evaluation of the Pugh controlled convergence method. Research in Engineering Design, 20, 41–58.
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4), 650–669.
Gigerenzer, G., Todd, P. M., & the ABC Research Group (1999). Simple heuristics that make us smart. New York: Oxford University Press.
Hammond, K. R. (1996). Human judgment and social policy: Irreducible uncertainty, inevitable error, unavoidable injustice. Oxford: Oxford University Press.
Hammond, K. R. (2007). Beyond rationality: The search for wisdom in a troubled time. Oxford: Oxford University Press.
Hauser, J. R., & Clausing, D. P. (1988). The house of quality. Harvard Business Review, 66(3), 63–73.
Hazelrigg, G. A. (1996). The implication of Arrow’s impossibility theorem on approaches to optimal engineering design. ASME Journal of Mechanical Design, 118, 161–164.
Hazelrigg, G. A. (1999). An axiomatic framework for engineering design. ASME Journal of Mechanical Design, 121, 342–347.
Hogarth, R. M., & Karelaia, N. (2007). Heuristic and linear models of judgment: Matching rules and environments. Psychological Review, 114, 733–758.
Howard, R. A. (1968). The foundations of decision analysis. IEEE Transactions on Systems Science and Cybernetics, 4, 211–219.
Katsikopoulos, K. V., & Fasolo, B. (2006). New tools for decision analysts. IEEE Transactions on Systems, Man, and Cybernetics: Systems and Humans, 36, 960–967.
Katsikopoulos, K. V., & Gigerenzer, G. (2008). One-reason decision-making: Modeling violations of expected utility theory. Journal of Risk and Uncertainty, 37, 35–56.
Katsikopoulos, K. V., & Martignon, L. (2006). Naive heuristics for paired comparisons: Some results on their relative accuracy. Journal of Mathematical Psychology, 50, 488–494.
Keeney, R. L. (2004a). Making better decision makers. Decision Analysis, 1, 193–204.
Keeney, R. L. (2004b). Stimulating creative design alternatives using customer values. IEEE Transactions on Systems, Man, and Cybernetics: Systems and Humans, 34, 450–459.
Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. New York: Wiley.
Keeney, R. L., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and value tradeoffs. Cambridge: Cambridge University Press.
Lages, M., Hoffrage, U., & Gigerenzer, G. (2000). How heuristics produce intransitivities and how intransitivities can discriminate between heuristics. Manuscript, Max Planck Institute for Human Development, Berlin, Germany.
Luce, R. D., & von Winterfeldt, D. (1994). What common ground exists for descriptive, prescriptive, and normative utility theories? Management Science, 40, 263–279.
Magee, C., & Frey, D. D. (2006). Experimentation and its role in engineering design: Linking a student design exercise with new results from cognitive psychology. International Journal of Engineering Education, 22, 478–488.
Martignon, L., & Hoffrage, U. (2002). Fast, frugal, and fit: Simple heuristics for paired comparison. Theory and Decision, 52, 29–71.
MIT Commission on Industrial Productivity (1989). Made in America: Regaining the competitive edge. Cambridge, MA: MIT Press.
Phillips, L. D. (1982). Requisite decision modeling: A case study. Journal of the Operational Research Society, 33, 303–311.
Pugh, S. (1981). Concept selection: A method that works. Proceedings of the International Conference on Engineering Design. Rome: ASME Press.
Pugh, S. (1990). Total design: Integrated methods for successful product engineering. Wokingham, UK: Addison-Wesley.
Regenwetter, M., Grofman, B., Marley, A., & Tsetlin, I. (2006). Behavioral social choice. Cambridge: Cambridge University Press.
Saari, D. G., & Sieberg, K. K. (2004). Are partwise comparisons reliable? Research in Engineering Design, 15, 62–71.
Saaty, T. (1980). The analytic hierarchy process. New York: McGraw-Hill.
Scott, M. J., & Antonsson, E. K. (1999). Arrow’s theorem and engineering design decision making. Research in Engineering Design, 11, 218–228.
Starmer, C. (2000). Developments in non-expected utility theory: The hunt for a descriptive theory of choice under risk. Journal of Economic Literature, 38(2), 332–382.
Thurston, D. L. (1991). A formal method for subjective design evaluation with multiple objectives. Research in Engineering Design, 3, 105–122.
Thurston, D. L. (2001). Real and misconceived limitations to decision based design with utility analysis. ASME Journal of Mechanical Design, 123, 176–182.
Thurston, D. L. (2006). Utility function fundamentals. In K. E. Lewis, W. Chen, & L. C. Schmidt (Eds.), Decision making in engineering design (pp. 15–19). New York: ASME Press.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1130.
von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioral research. New York: Cambridge University Press.