1. INTRODUCTION
1.1. Goal of the Special Issue
This Special Issue provides a justification and a proposed research direction for establishing a common benchmarking scheme for function representations that are developed and deployed throughout academia and practice, with the ultimate goal of providing industry with practically usable function modeling tools and concepts. Earlier work on function benchmarking was presented at the International Conference on Engineering Design in 2013 (Summers et al., 2013) and is revisited in a companion paper of this Special Issue (Summers et al., 2017). Despite decades of research into function descriptions (Eastman, 1969; Freeman & Newell, 1971; Rodenacker, 1971; Collins et al., 1976; Sembugamoorthy & Chandrasekaran, 1986; Andreasen & Hein, 1987; Hubka & Eder, 1988; Ullman et al., 1988; Vescovi et al., 1993; Sasajima et al., 1995; Bracewell & Sharpe, 1996; Qian & Gero, 1996; Umeda et al., 1996; Goel, 1997; Kirschman & Fadel, 1998; Gero & Kannengiesser, 2002; Hirtz et al., 2002; Chandrasekaran, 2005; Albers et al., 2008; Erden et al., 2008; Yang et al., 2010; Linz, 2011; Sen et al., 2011; Srinivasan et al., 2012; Pahl et al., 2013; Schultz et al., 2014), industry does not appear to have adopted function modeling in practice, even while professing a need to express product information beyond form (Eckert, 2013; Arlitt et al., 2016). One possible reason for industry's resistance is that there is no canonical definition of function, with the various approaches to function modeling being grounded in different conceptualizations. Research efforts have resulted in several distinct views of function in engineering design (e.g., Deng, 2002; Goel et al., 2009; Crilly, 2010; Eckert, 2013; Vermaas, 2013). These perspectives have been formalized into different modeling approaches. For example, several design textbooks describe the use of function-flow networks to capture the sequence of and dependencies among the desired functions of a product or system (Shishko & Aster, 1995; Ulrich & Eppinger, 2008; Ullman, 2010; Buede, 2011; Haskins & Forsberg, 2011; Pahl et al., 2013).
A preceding Special Issue in this journal in 2013, edited by Pieter Vermaas and Claudia Eckert, asked for research papers on how and for what purpose function models could be applied, based on position papers covering different notions of function (Vermaas, 2013), a discussion of engineers working with different notions of function in practice (Eckert, 2013), and the evolution over decades of an approach to function–structure–behavior (Goel, 2013). The editorial concluded that there is still a culture of “… my function model is better than yours!”, which ignores the fact that this ambiguity about what function modeling is and how it is done is in itself a barrier to the widespread adoption and use of function models and descriptions (Vermaas & Eckert, 2013). One of the reasons for the plethora of different approaches is that different researchers are working on different aspects of the function modeling problem, at different scales, with the goal of supporting different types of reasoning, in different industry sectors, and with varying research goals.
In response to the disconnect among those researching functions, we assert that each approach has its own strengths and weaknesses, and each may be well suited to specific domains. Rather than developing a single, unified definition of function, we aim to foster a discussion on the usefulness and applicability of the different approaches for different reasoning applications and domains. Therefore, we propose a different approach to function research: developing a set of comparative benchmarks that can be explored with the different modeling approaches. By utilizing benchmark problems, the community can start to discern which approaches are more useful for different needs, and perhaps discover which elements of the representations and vocabularies are most conducive to different elements of function thinking.
Benchmarking is routinely used in other fields to enable comparative insights, but it has not previously been used in research on engineering design. Therefore, this Special Issue is also an exploration of benchmarking and related techniques for understanding the strengths and weaknesses of different modeling approaches or representations. For example, we could see different process modeling approaches or creativity methods being benchmarked. To assist the contributors to this Special Issue in exploring and advancing the possibility of benchmarking function modeling approaches, two texts were distributed: a brief sketch of benchmarking written by Pieter Vermaas and an abridged version of the paper (Summers et al., 2017), the presentation of which was attended by many of the contributors. The next section contains this sketch of benchmarking.
2. DISCUSSION ON BENCHMARKING
This section, authored by Pieter Vermaas, starts by introducing two different forms of benchmarking for function modeling approaches. The first form aims at improving a specific function modeling approach by analyzing other approaches. The second form aims at comparing function modeling approaches used for similar tasks. Next, a precondition that is specifically relevant to the second form of benchmarking is analyzed: that function modeling approaches can be categorized into classes of similar approaches. The final subsection considers benchmarking problems for function modeling approaches and their role in the two forms of benchmarking.
2.1. Two forms of benchmarking function modeling approaches
Stepping back from function modeling for a moment and focusing on benchmarking in general, one can distinguish two main forms of benchmarking (for a richer and more detailed discussion, see Stapenhurst, 2009). In the first form, producers of a product analyze other products to determine how they can improve their own product. This may be seen as producer-driven benchmarking. In the second form, users analyze a set of similar products in order to compare them. Here, this shall be called user-driven benchmarking.
In producer-driven benchmarking, it is the producers of the product who have an active role. They decide to evaluate and improve their product, decide which aspect of the product should be evaluated and improved, and decide which other products are to be analyzed for the evaluation and improvement. Moreover, producer-driven benchmarking primarily serves the interests of the producer. The product to be evaluated and improved is not necessarily compared with similar, rival products (say, to improve the seating procedure in a theater, one can compare the theater with a plane), the outcomes of the comparison are not meant for or made public to users of the product, and if all goes well, the producer benefits by acquiring the means to improve its product.
In user-driven benchmarking, it is the users of a product, or a representative of the users, who have an active role. The users decide to evaluate the product in comparison to a set of similar products, and the users decide which aspects of the product are included in the comparison. User-driven benchmarking primarily serves the interests of the users. The products are compared with rival products (say, a set of mobile phones is compared), and the outcomes of the comparison are made public to the users so that these users can determine which of the compared products serve their interests best. The producers of a compared product have the passive role of providing their product, and may hope that things go well and that their product fares well in the comparison. However, user-driven benchmarking may also serve the interests of producers in the long run, as it informs producers about which aspects users value in products.
For function modeling approaches, the producers are the modelers of functions in design research, and the users are taken to be industry. Producer-driven benchmarking of function modeling approaches may therefore be called modeler-driven benchmarking, and user-driven benchmarking may be called industry-driven benchmarking.
Modeler-driven benchmarking thus means that modelers improve some aspect A of their function modeling approach M by analyzing other approaches M′, M″, … . Modeler-driven benchmarking involves evaluation, for it implies determining how well the other function modeling approaches score on aspect A. Yet this evaluation is not meant to be judgmental; the other function modeling approaches M′, M″, … that are evaluated may not even be meant to support the task that approach M is meant to support. For instance, M can be meant for ideation in conceptual design, whereas M′ is meant for reverse engineering.
Industry-driven benchmarking is, in contrast, judgmental. It involves comparing a series of function modeling approaches M, M′, M″, … that support the same engineering task by measuring them against a number of aspects A, A′, A″, … that industry values in using function modeling approaches for the task. The outcomes of the comparison are then used by industry to select the approaches that best serve the task.
2.2. Categories of function modeling approaches
User-driven benchmarking has, as said, the goal of comparing a set of similar products on various aspects relevant to users of the product. A precondition to this form of benchmarking is therefore that products can be categorized into classes of similar products, where similarity may be defined with more or less specificity. For instance, users can be interested in benchmarking products that realize a broad goal, such as traveling from Clemson, South Carolina, in the United States, to Milton Keynes in the United Kingdom. Multiple different combinations of trips with a variety of means such as planes, trains, cars, and boats may then be categorized as similar and surface as such in the comparison. Or users can be interested in a more specific goal, such as a laptop with particular technical characteristics and a particular price, in which case only a few products make up the category of similar products that are compared.
In the case of function modeling approaches, this categorization warrants attention, as it is far from clear whether these approaches can be taken as similar. Design research has created many function modeling approaches (see the 2013 AI EDAM Special Issue on function modeling; Vermaas & Eckert, 2013). This variety may be understood as a preliminary stage before design researchers find consensus about the best or most tenable approach. On this understanding, benchmarking the current approaches (in both forms of benchmarking) may be seen as speeding up the process of finding this ultimate function modeling approach. An alternative understanding is that the variety of function modeling approaches is due to the different tasks for which function modeling is used, such as supporting ideation in conceptual design, supporting archiving of existing products, or enabling incremental changes in electromechanical engineering (e.g., Vermaas, 2013). On this second understanding, industry-driven benchmarking in particular should take into account that function modeling approaches can only be taken as similar if they are meant to support the same engineering task.
Ignoring this task dependency of function modeling can lead to unnecessary negative judgments. Consider, for instance, a function modeling approach M that is developed for the task of supporting incremental changes in electromechanical products, and does a good job for this task. Industry-driven benchmarking of function modeling approaches for task T can then reveal that M scores well on all aspects A, A′, … that are relevant to task T. This function modeling approach M may now also be of use for another task T′, say, supporting failure mode analysis in products. If this additional use is presented as proof that M is a versatile approach that also has T′ as its goal, then M can also be included in industry-driven benchmarking of function modeling approaches for task T′. This second industry-driven benchmarking effort judges M on other aspects A″, A‴, … relevant to T′, and M may now end up as a relatively mediocre function modeling approach.
For modeler-driven benchmarking, determining the tasks for which the function modeling approaches are meant is less necessary, although informative. When modelers want to improve their function modeling approach on aspect A, they should look at function modeling approaches that are doing well on that aspect, and knowing the tasks for which other approaches are meant may provide information on which function modeling approaches are doing well on aspect A.
When function modeling approaches are simply characterized by means of a number of features, the tasks T, T′, T″, … for which the approaches are meant and the aspects A, A′, A″, … by which they are evaluated are not specified. The distinction between modeler-driven and industry-driven benchmarking is then suppressed, making the characterization somewhat ambiguous. In Summers et al. (2017), more than 20 dimensions are introduced for characterizing function modeling approaches. These dimensions include, for instance,
• scope of an approach: the domain for which the approach is intended;
• flexibility: the ability to modify and adapt the representation of functions by an approach to address new problems;
• closeness of mapping: the modeling conventions that need to be learned to apply the approach and how intuitive the resulting models are;
• error-proneness: whether the notation used in an approach induces “careless mistakes”;
• interpretability: how consistent and precise the interpretation of the function models is across different individuals, domains, and levels of expertise; and
• change propagation: whether the representation of functions supports discovery of the effects of perturbations in a system.
A characterization of function modeling approaches along these dimensions can, however, be turned into modeler-driven or industry-driven benchmarking. For modeler-driven benchmarking, the characterization of function modeling approaches along the various dimensions gives modelers information about which approaches to analyze for improving their own function modeling approach on a specific aspect A. If, for instance, a modeler is interested in reducing the error-proneness of their approach, the characterization gives rapid information about which other function modeling approaches score low on this aspect/dimension. For industry-driven benchmarking, something similar can be done by taking some of the dimensions as fixing the task T that singles out the category of function modeling approaches that are compared, and by taking other dimensions as the aspects A, A′, … that drive the comparison. If, for instance, this industry-driven benchmarking concerns function modeling approaches for supporting the analysis of changes in product-service systems, then the characterizations by scope and change propagation fix the approaches that are compared. If, in addition, an aspect A on which the approaches are compared is consistency among the function modelers, then the characterization along the interpretability dimension determines the judgment of which approach is best.
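To make these two uses of a dimension-based characterization concrete, the following minimal sketch (in Python, with entirely hypothetical approach names and dimension scores) filters a small catalog of approaches in the two ways just described: modeler-driven, by looking up which approaches score best on a single aspect such as error-proneness, and industry-driven, by first fixing a task through scope and change propagation and then ranking the remaining candidates by interpretability. It is an illustration of the selection logic only, not part of any cited approach.

```python
# Minimal sketch: using a dimension-based characterization of function modeling
# approaches for modeler-driven and industry-driven benchmarking.
# All approach names and scores are hypothetical placeholders.

CATALOG = {
    # approach: scores on a 1 (poor) .. 5 (good) scale, except error_proneness,
    # where lower means the notation induces fewer careless mistakes.
    "Approach_A": {"scope": "electromechanical", "error_proneness": 2,
                   "interpretability": 4, "change_propagation": 5},
    "Approach_B": {"scope": "software",          "error_proneness": 4,
                   "interpretability": 3, "change_propagation": 2},
    "Approach_C": {"scope": "electromechanical", "error_proneness": 3,
                   "interpretability": 5, "change_propagation": 4},
}


def modeler_driven(aspect: str, lower_is_better: bool = False):
    """Order approaches so the most instructive ones for aspect A come first."""
    return sorted(CATALOG, key=lambda a: CATALOG[a][aspect], reverse=not lower_is_better)


def industry_driven(task_scope: str, min_change_propagation: int, compare_aspect: str):
    """Fix the task via scope and change propagation, then rank the remaining
    candidates on the aspect industry cares about (e.g., interpretability)."""
    candidates = [a for a, d in CATALOG.items()
                  if d["scope"] == task_scope
                  and d["change_propagation"] >= min_change_propagation]
    return sorted(candidates, key=lambda a: CATALOG[a][compare_aspect], reverse=True)


if __name__ == "__main__":
    # A modeler wanting to reduce error-proneness studies the approaches that
    # induce the fewest careless mistakes (lowest score) first.
    print(modeler_driven("error_proneness", lower_is_better=True))
    # Industry benchmarking change analysis in electromechanical systems,
    # judged on interpretability.
    print(industry_driven("electromechanical", min_change_propagation=4,
                          compare_aspect="interpretability"))
```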
2.3. Benchmarking problems for function modeling approaches
Setting a benchmarking problem for function modeling approaches also introduces ambiguity between modeler-driven and industry-driven benchmarking. In general, benchmarking problems can be defined by producers to make explicit the differences between their products and to create a threshold the producers want to pass. Competitions between solar-powered cars count as such challenges, and there the goal of the producers is to create cars that can complete the challenge, and can do so better than others. In this Special Issue, reverse engineering a glue gun is the benchmarking problem for function modeling approaches; when taken as modeler-driven benchmarking, taking up this problem means attempting to get the function structure of the glue gun right and showing others how it can be done with different approaches.
Benchmarking problems can also be set by users, with examples including competitions between producers to win a contract. Proposals by the producers are then evaluated by the users for judging which proposal is best and who gets the contract. From this perspective, the challenge of reverse engineering a glue gun becomes industry-driven benchmarking for showing to industry which approach is best in capturing the function structure of the glue gun.
3. EXPLANATION OF THE THREE THEMES
In this Special Issue, the glue gun challenge is to be seen as a modeler-driven benchmarking problem for developing within design research the language and practice of comparing function modeling approaches. To this end, we invited special contributions in three specific areas:
1. papers that present a function model created within the author's representation of choice, applied to the glue gun challenge problem, together with a detailed critique of the approach explaining its capabilities and limitations using the function model(s) for the problem. These papers are used to demonstrate how a single benchmark problem can be used to compare multiple different modeling approaches.
2. papers that present a suite of benchmark challenge problems. To this end, we sought papers that illustrate design problems for function modeling that can be used to compare function modeling approaches. The problems should be fully detailed in terms of scope, size, and domain, and should clearly illustrate the criteria for comparing modeling approaches for which the problem can serve as a benchmark.
3. papers presenting empirical studies comparing the performance of multiple function modeling approaches with respect to selected benchmark dimensions of the authors' choice. These might include studies comparing the performance of two approaches in supporting ease of modeling, human interpretability of models, teachability of modeling approaches, ability to support innovative ideation, physics-based reasoning using the models, or any other dimension(s) of the authors' choice.
Many papers were received, reviewed, and evaluated for appropriateness for inclusion in this Special Issue. The selected papers presented as a collection in this Special Issue primarily address the first theme, in which researchers applied their models to a common benchmark product, the glue gun. While the goal of the Special Issue was also to include proposals for new benchmark challenge problems, the community did not respond with offers of problems. This might suggest that our research community is still evolving in its thinking about the research challenges from a more coordinated and distributed point of view. For the editors, this suggests an opportunity to address the gap in the literature through creative and innovative means in the future. Finally, only a few papers were received that presented findings from direct comparisons between different models. Again, this suggests that the community has not yet reached a maturity level at which peer function modeling approaches are understood well enough to be directly benchmarked or compared against each other.
3.1. Theme 1: Model demonstration with glue gun
Unal Yildrim, Felician Campean, and Huw Williams have developed system state flow diagram (SSFD), a framework that can assist modeling solution-neutral functions of multidisciplinary systems at various levels of complexity (Yildirim et al., Reference Yildirim, Campean and Williams2017). This framework is intended to support designing, modeling, and analysis of products and systems. The SSFD originates from fault analysis in automotive engineering. The analysis starts with the definition of input and output states of the operand, conceptualized as an object, in terms of the measurable attributes or properties that describe the states. The function is defined in relation to the transformation needed to change the values of attributes from the initial input to the final output state. The SSFD model is developed by decomposing the function through identification of intermediate states of the flow between the input state and the output state. The function model of a product or system is represented as a chain of state transitions, including the transitions in the main flow, connecting flows, and branching flows. Further, to this function model, conditional fork node heuristics are added to describe the distinct, multiple modes of operation corresponding to various use cases in a complex, multidisciplinary system. While SSFD has been applied successfully in automotive companies as it supports modeling across multiple domains, this paper presents a rigorous academic basis and guidelines to the application of the method. Similar to work proposed by others (Otto & Wood, Reference Otto and Wood2001), the SSFD offers guidelines for how to construct flow models. The paper sets SSFD in the context of other function modeling approaches. Like other function modeling approaches, the SSFD supports abstract top-down decomposition, but it also allows modeling multiple modes of operations that are adopted in a complex system over its lifecycle through branching points in the model that describe different modes of operation. The SSFD framework has been used in industry, and its several features are illustrated by applying it to develop function models of a glue gun and the powertrain of an electric vehicle.
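To give a flavor of the kind of structure such a state-based function model implies, the following minimal sketch (our own Python illustration, not code from the paper) represents states by measurable attribute values, functions as transitions between states, and a conditional fork that selects between modes of operation. The attribute names and glue gun values are assumptions chosen only for illustration.

```python
# Minimal sketch of a state-transition view of function: states carry measurable
# attribute values, functions are transitions between states, and a conditional
# fork selects a mode of operation. Attributes/values are illustrative only.
from dataclasses import dataclass, field


@dataclass
class State:
    name: str
    attributes: dict = field(default_factory=dict)  # e.g., {"temperature_C": 20}


@dataclass
class Transition:
    function: str      # verb-noun description of the transformation
    source: State
    target: State


def fork(state: State, condition, if_true: Transition, if_false: Transition) -> Transition:
    """Conditional fork node: pick the next transition based on an attribute test."""
    return if_true if condition(state) else if_false


# Illustrative glue flow: solid stick -> molten glue -> bead applied to workpiece.
solid = State("glue stick (solid)", {"temperature_C": 20, "phase": "solid"})
molten = State("glue (molten)", {"temperature_C": 180, "phase": "liquid",
                                 "trigger_pressed": False})
bead = State("glue bead (applied)", {"phase": "liquid", "location": "workpiece"})

melt = Transition("convert solid glue to liquid glue", solid, molten)
dispense = Transition("transport liquid glue to workpiece", molten, bead)
hold = Transition("retain liquid glue in chamber", molten, molten)

# Mode of operation selected at a fork: dispense only while the trigger is pressed.
next_step = fork(molten, lambda s: s.attributes.get("trigger_pressed", False), dispense, hold)
print(next_step.function)   # -> "retain liquid glue in chamber"
```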
The paper by Kilian Gericke and Boris Eisenbart, titled “The Integrated Function Modeling Framework and Its Relation to Function Structures,” proposes a novel approach to function modeling called the integrated function modeling (IFM) framework, which combines multiple viewpoints on functions in a single model (Gericke & Eisenbart, 2017). Building on work by Vermaas (2009, 2013), Vermaas and Eckert (2013), and others, the IFM incorporates a behavior-related notion, an outcome-related notion, and a task- or goal-related notion of function into a single model, because the authors see in existing function modeling approaches an inherent lack of guidance for linking different contents and viewpoints, in particular across design disciplines such as mechanical engineering, electrical engineering, and software, which have to come together in most complex contemporary systems. The IFM uses a design structure matrix to combine a state view, a use case view, an actor view, an effect view, and an interaction view centered on a process flow view, which presents the qualitative flow of different types of processes and represents a behavioral view of the product showing causal links between transformations. It assumes that a team would select the views that are beneficial to their specific tasks rather than always work with a comprehensive model. The approach starts with a hierarchical decomposition of the overall function, the main functions, and the auxiliary functions, together with the assumptions they incorporate, using an abstract verb–noun representation stating inputs and outputs. These are combined into a final model, which breaks the function steps down into transformations of energy, matter, and information. The paper shows the different views for the glue gun as well as the resulting combined matrix. The IFM models are compared to the function structures approach (Pahl et al., 2013). The authors argue that the two modeling approaches complement each other, but that the IFM provides a richer and, therefore, potentially more useful representation, as it centers multiple representations around a function model.
In another offering, “Introduction to Quantitative Engineering Design Methods Via Controls Engineering,” Briana M. Lucero, Matthew J. Adams, and Cameron J. Turner first observe that the function models of electromechanical products commonly practiced and taught in design education, such as those stored in the Oregon State Design Repository, do not include a satisfactory modeling protocol for signal flows, despite signal being one of the three major flow types in the function literature alongside material and energy (Lucero et al., 2017). The authors suspect that this gap could be the result of a lack of formalism for modeling signals as nonconserved flows that are carried by material or energy flows. They further observe that the existing formalism of modeling control systems as chains of blocks and arrows already provides sufficient structure to address this gap. The authors then propose a formalism based on controls theory, using four similarities between controls engineering and functions: schematic similarity, the similarity of control variables with nondimensional flows in function models, the similarity of the differential equations of transfer functions with the bond graph representation of functions, and isomorphic matching. The authors then apply these ideas to three design models, including the benchmark model of a glue gun, to illustrate their approach. The paper demonstrates that the key performance parameters of a mechanical system can be computed through function modeling using dimensional analysis techniques, such as Buckingham–Pi. It also shows that the functions in the function basis vocabulary can be modeled as transfer functions of control systems, using bond graphs, because the five basic elements of bond graphs (resistive, capacitive, inductive, transformer, and gyrator) are analogous to basic mechanical functions.
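As a reminder of the dimensional analysis machinery referred to here, the following worked example (our own illustration, not taken from the paper) applies the Buckingham–Pi theorem to a deliberately simplified glue-dispensing situation; the choice of variables is an assumption made only to show how a performance parameter emerges as a dimensionless group.

```latex
% Illustrative Buckingham--Pi example (assumed variables, not from the paper).
% Suppose the volumetric glue flow rate Q depends only on the driving pressure
% difference \Delta p, the melt viscosity \mu, and the nozzle diameter D:
%   Q = f(\Delta p, \mu, D).
% Dimensions: [Q] = L^3 T^{-1}, [\Delta p] = M L^{-1} T^{-2},
%             [\mu] = M L^{-1} T^{-1}, [D] = L.
% With n = 4 variables and k = 3 fundamental dimensions (M, L, T),
% the theorem predicts n - k = 1 dimensionless group:
\[
  \Pi = \frac{Q\,\mu}{\Delta p\, D^{3}}
  \qquad\Longrightarrow\qquad
  Q = C\,\frac{\Delta p\, D^{3}}{\mu},
\]
% where C is a dimensionless constant to be fixed by experiment or by a more
% detailed model (e.g., Hagen--Poiseuille flow through the nozzle).
```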
Next, Hossein Mokhtarian, Eric Coatanéa, and Henri Paris, in “Function Modeling Combined With Physics-Based Reasoning for Assessing Design Options Supporting Innovative Ideation,” present the dimensional analysis conceptual modeling framework for function modeling, an approach that uses a physics-based representation of functions and combines dimensional analysis, bond graphs, cause and effect, and a TRIZ-like representation (Mokhtarian et al., 2017). The framework is shown to facilitate physics-based reasoning, the exploration of design options, and the generation of ideas for design variants within the context of reverse engineering or incremental design. The framework is applied through eight steps: defining the system and its boundary; modeling functions using the bond graph vocabulary; identifying the variable list; assigning variables to the function model; applying causal reasoning rules to the function model; generating the causal model/graph; computing the behavioral laws of the model; and using the model for analysis and design reasoning. The glue gun example is used to illustrate the ideas of the paper. The method can detect TRIZ-like contradictions, such as the simultaneous need to both increase and decrease the glue stick diameter in order to maximize glue flow rate.
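The kind of contradiction detection mentioned at the end of this summary can be illustrated with a small sketch of causal reasoning over a signed influence graph (our own Python illustration with assumed variables and influence signs, not the published model): propagating a desired increase in flow rate back through the graph reveals that one root variable is asked to both increase and decrease.

```python
# Minimal sketch of detecting a TRIZ-like contradiction on a signed causal graph.
# Edges carry the sign of the influence (+1: increases, -1: decreases).
# Variable names and signs are illustrative assumptions about a glue gun.

EDGES = {
    # (cause, effect): sign of influence
    ("stick_diameter", "glue_cross_section"): +1,  # thicker stick, more glue per push
    ("stick_diameter", "melting_rate"): -1,        # thicker stick melts more slowly
    ("glue_cross_section", "flow_rate"): +1,
    ("melting_rate", "flow_rate"): +1,
}


def desired_directions(target: str, goal_sign: int, desires=None):
    """Propagate the desired change of `target` back to its causes.

    Assumes the influence graph is acyclic. Returns, for each upstream variable,
    the set of change directions (+1 / -1) implied by the goal."""
    if desires is None:
        desires = {}
    for (cause, effect), sign in EDGES.items():
        if effect == target:
            direction = goal_sign * sign
            desires.setdefault(cause, set()).add(direction)
            desired_directions(cause, direction, desires)
    return desires


# Goal: maximize flow rate (+1). A contradiction appears on stick_diameter.
wants = desired_directions("flow_rate", +1)
for variable, directions in wants.items():
    if len(directions) > 1:
        print(f"Contradiction: {variable} should be both increased and decreased")
```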
3.2. Theme 2: Exemplar problems
While this theme was presented to potential authors in the call, no papers were received that specifically addressed this topic.
3.3. Theme 3: Comparative studies
In “Transforming Function Models to Critical Chain Models Via Expert Knowledge and Automatic Parsing Rules for Design Analogy Identification,” Malena Agyemang, Julie Linsey, and Cameron J. Turner seek to determine whether pruning rules are a viable method for transforming a complex function model into a model that only illustrates critical functions and critical flows (Agyemang et al., 2017). The authors use as a benchmark a set of expertly (manually) derived function models and compare those to models derived using pruning or parsing rules. Finally, the authors use the manually and automatically generated critical chain models as input to a design analogy system. Their work shows promise that pruning rules are approaching the capability to replace the daunting task of manually creating critical chain function models.
Next, Margherita Peruzzini, Roberto Raffaeli, Marco Malatesta, and Michele Germani, in “Toward a Function-Based IT Platform for Variant Redesign of Household Appliances,” demonstrate how function and other product models can be leveraged to support innovation (Peruzzini et al., 2017). Specifically, they present an approach for using function in the generation of new product variants. The authors' work goes beyond current function-based concept generation approaches by adding several layers of models and interactions, specifically at the modular and structural levels. Most notably, the authors use a rule-based component configuration system to help assemble the “new” design. Finally, the authors present a case study carried out in partnership with Electrolux to demonstrate the system's capabilities by designing a new kitchen range variant.
Unlike the previous two papers, in “A Bridge to Systems Thinking in Engineering Design: An Examination of Students' Ability to Identify Functions at Varying Levels of Abstraction,” Megan Tomko, Jacob Nelson, Robert Nagel, Matthew Bohm, and Julie Linsey demonstrate how different function modeling approaches affect student learning and students' ability to think in terms of systems (Tomko et al., 2017). To test this empirically, two groups of students, constituting the modeling and the enumerating groups, are asked to generate functions for different products, and their responses are compared using two criteria: the correctness and the abstraction level of the functions. Prior to the experiment, the students in the modeling group are taught systems abstraction, function enumeration, and function modeling, whereas the students in the enumerating group are only taught systems abstraction and function enumeration. The correctness of functions is categorized as correct, partially correct, or incorrect. Correct and partially correct functions are further categorized as high level, low level, interface, or ambiguous. Tomko et al. observed that the students in the modeling group generated more low-level, interface, and ambiguous functions, but fewer high-level functions, than the students in the enumerating group. In addition, the students in the modeling group generated fewer incorrect functions than the students in the enumerating group. These results indicate that the students in the modeling group can comprehend functions better at various levels of abstraction and, therefore, have a stronger holistic systems thinking ability than the students in the enumerating group.
Finally, an approach to comparing the inferencing capabilities of function representations is presented in “Comparing Function Structures and Pruned Function Structures for Market Price Prediction: An Approach to Benchmarking Representation Inferencing Value,” by Amaninder Singh Gill, Joshua D. Summers, and Cameron J. Turner. In this comparison paper (Gill et al., 2017), several different representations of function models, generated using different grammar and vocabulary restrictions, are used to predict the market price of test products. Evaluating the value or benefit of a representation for drawing inferences is one way to compare different representations. It was found that the unpruned representations were able to predict the market prices more accurately, while previous work had found that the pruned representations better supported human interpretation (Caldwell et al., 2012). Where others directly compared representations with respect to student learning, concept generation, or transformation, this approach to benchmarking focuses on quantitatively measuring the reasoning support of a representation.
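The general idea of measuring a representation's inferencing value can be illustrated with a deliberately simple sketch: extract a few graph-level counts from each function model, fit a linear model against known market prices, and compare held-out prediction error across representations. The feature set, the data, and the use of ordinary least squares below are all assumptions for illustration; they are not the procedure used by Gill et al.

```python
# Minimal sketch of benchmarking the "inferencing value" of two function model
# representations by how well simple graph-level features predict market price.
# Features, prices, and the least-squares model are illustrative assumptions.
import numpy as np


def features(model):
    """Turn a toy function model (lists of functions and flows) into a feature vector."""
    return np.array([len(model["functions"]), len(model["flows"]), 1.0])  # 1.0 = bias term


def fit_and_score(models, prices):
    """Fit ordinary least squares on all but the last product; report its absolute error."""
    X = np.array([features(m) for m in models])
    y = np.array(prices, dtype=float)
    coeffs, *_ = np.linalg.lstsq(X[:-1], y[:-1], rcond=None)  # train on all but last
    return abs(X[-1] @ coeffs - y[-1])                         # test on the held-out product


# Hypothetical full (unpruned) and pruned models of four consumer products.
full = [{"functions": ["f"] * n, "flows": ["e"] * m}
        for n, m in [(12, 18), (9, 14), (15, 23), (10, 15)]]
pruned = [{"functions": ["f"] * n, "flows": ["e"] * m}
          for n, m in [(5, 6), (4, 5), (6, 8), (5, 7)]]
prices = [24.99, 19.99, 34.99, 22.49]

print("full representation error  :", fit_and_score(full, prices))
print("pruned representation error:", fit_and_score(pruned, prices))
```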
4. THE NEXT STEPS
This Special Issue contains four papers with models of the glue gun. The next logical step is to analyze the strengths and weaknesses of the models and provide the results of the benchmarking exercise. The editors of the Special Issue plan to engage in this as a next step, based on the final papers in this Special Issue, and intend to submit a follow-on stand-alone article summarizing the findings. In the spirit of benchmarking, we also invite others to create their own comparisons and benchmarking problems and protocols. The benchmarking exercise will serve two distinct audiences: it will offer common-sense suggestions for practitioners and a theoretical reflection on the merits of benchmarking for the academic community.
This Special Issue has shown that there is still a lively interest in function modeling in the engineering design research community, as a new generation of authors has embraced the issue. While they have made huge strides in engaging with the work of previous generations, we have also seen that older work has been somewhat ignored and that old questions, such as “should function be solution neutral?”, are nowhere near being resolved. With the exception of Yildirim et al. (2017), the papers have in general again started from a theoretical perspective rather than embracing the challenges that industry is facing. However, Gericke and Eisenbart (2017) and Yildirim et al. (2017) do embrace the challenge of using function modeling to bridge across the different disciplines, as products become more complex and initiatives like Industry 4.0 and the Internet of Things push companies to bring hardware, electronics, and software closer together. We can still look forward to decades of interesting research on functions.
The editors collectively agree that while this issue shows great strides in the applicability and usefulness of function modeling, a majority of the papers submitted are extensions of the authors' previous work. However, the call for benchmarking has led to a more thorough validation and illustration of the modeling approaches and has yielded a set of models of the same object: the glue gun. The papers presented here do address the original calls of the Special Issue, but none particularly addresses the issue of benchmarking problems. Thus, the editors believe that more discussion regarding benchmarking, as well as model validation and verification, must occur. The formal and mathematically rigorous classification of formal languages in computing theory, such as the Chomsky hierarchy, could be used as a reference point for this discussion and to illustrate an equivalent gap in function research. It is because of this kind of formalism that a newly described problem can be formally classified within a hierarchy of problem classes and that, in complexity theory, all problems within a class such as the NP-complete problems can be shown to be computationally equivalent to one another. As a result, newly proposed algorithms can be “tested” against these classes to evaluate their “goodness.” For example, the traveling salesman problem is often used as a representative of the class of NP-complete problems and as a test bed for novel algorithms that attempt to address that class. While classifying design problems is, by nature, a different type of challenge than classifying computing problems, this comparison shows that function research in engineering design still does not have a metric or yardstick to describe how well a particular solution approach lends itself to a particular problem. Computer scientists can readily evaluate the effectiveness of their algorithms by assessing run time and complexity (Big-O); those who investigate function, however, still lack the basic assessment or benchmarking techniques to evaluate the effectiveness of their approaches in specific domains. The editors plan to further develop a suite of benchmarking problems and ask that the engineering design community also contribute to this cause. The long-term goal is to introduce benchmarking into the canon of methods used regularly in engineering design, so that tools, methods, and modeling approaches are presented together with a description of their scope and the areas of application in which they are most useful.
Matthew Bohm is an Assistant Professor at Florida Polytechnic University.
Claudia Eckert is a Professor of design at the Open University. She has a longstanding interest in studying and supporting industrial practice in different design domains and has published numerous papers on it. In particular, she has been working on process modeling, engineering change, and functional modeling of complex engineering products.
Chiradeep Sen is an Assistant Professor at Florida Institute of Technology.
Venkataraman Srinivasan is a Research Associate at Singapore University of Technology and Design.
Joshua D. Summers is a Professor of mechanical engineering at Clemson University, where he is also the Co-Director of the Clemson Engineering Design Applications and Research Group. He earned his PhD in mechanical engineering from Arizona State University and his MS and BS from the University of Missouri. Dr. Summers worked at the Naval Research Laboratory (VR Lab and NCARAI). His research has been funded by government, large industry, and small- to medium-sized enterprises. Joshua's areas of interest include collaborative design, knowledge management, and design enabler development with the overall objective of improving design through collaboration and computation.
Pieter Vermaas is a Professor at the Technical University of Delft.