Throughout its history, organizational psychology research has recognized and attempted to explain the complexity of organizational phenomena (e.g., Dooley, Reference Dooley, Guastello, Koopmans and Pincus2008; Guastello & Liebovitch, Reference Guastello, Liebovitch, Guastello, Koopmans and Pincus2008). Specifically, the underlying assumption is that psychosocial events cannot be exclusively traced back to a single isolated element, but rather to configurations of variables linked by complex relationships of interdependence (Crilly, Reference Crilly, Fiss, Cambré and Marx2013; Fiss, Reference Fiss2011; Ketchen, Reference Ketchen, Fiss, Cambré and Marx2013; Misangyi et al., Reference Misangyi, Greckhamer, Furnari, Fiss, Crilly and Aguilera2016). Configurations represent a specific combination of variables leading to a given outcome (e.g., Rihoux & Ragin, Reference Rihoux and Ragin2009) and are not new to the social sciences. In fact, configurational hypotheses have been addressed for years by applying regression models to test moderation effects (e.g., Grofman & Schneider, Reference Grofman and Schneider2009). However, this methodological approach has a limit: interaction terms rarely extend beyond three variables (e.g., Dawson & Richter, Reference Dawson and Richter2006; Igartua & Hayes, Reference Igartua and Hayes2021), thus leaving much of the space of possible configurations out of its reach.
For this reason, a number of studies have started using alternative methodologies to further understand and explore the role of configurations in organizational studies (e.g., Ong & Johnson, Reference Ong and Johnson2021). One of these emerging methods is qualitative comparative analysis (QCA), together with its variant, fuzzy-set qualitative comparative analysis (fsQCA). This family of methods belongs to the comparative case-oriented approach and combines concepts from set theory and Boolean algebra (e.g., Furnari et al., Reference Furnari, Crilly, Misangyi, Greckhamer, Fiss and Aguilera2021). Although originally applied to case studies, these methods are now adopted to analyze empirical data and to generalize results, with the possibility of replication in future research (Parente & Federo, Reference Parente and Federo2019; Roig-Tierno et al., Reference Roig-Tierno, Gonzalez-Cruz and Llopis-Martinez2017).
Unlike correlational approaches based on additive and linear attributes, QCA methods identify the relationships and interdependencies of multiple factors related to a given criterion, allowing for the investigation of conjunct, equifinal, and asymmetric effects (Fiss, Reference Fiss2011). In doing so, these methods analyze patterns of variables that are jointly related to a dependent variable (Ragin & Fiss, Reference Ragin, Fiss and Ragin2008). For this reason, fsQCA is increasingly applied in organizational psychology, as it seems to represent a suitable option to effectively test configurational hypotheses and to offer robust results that enrich the literature.
In recent years, an increasing number of studies adopting fsQCA have been published in the field of organizational psychology (e.g., Cangialosi et al., Reference Cangialosi, Battistelli and Odoardi2021). Despite this, configurational theory and its application are still not sufficiently covered in organizational research (e.g., Meier, Reference Meier2017; Ott et al., Reference Ott, Sinkovics, Hoque, Cassell, Cunliffe and Grandy2018). This is partially due to a lack of knowledge of the potential of this methodology, of the guidelines to follow, and of the actual procedures to be implemented.
In order to fill this gap and to further promote the correct application of fsQCA, the present study pursues three objectives: (a) to highlight some of the most relevant theoretical-methodological features of the method, (b) to offer a detailed description of fsQCA in its various phases by providing a perspective on the most frequently adopted guidelines, and (c) to provide step-by-step instructions on how to run fsQCA using the QCA package in R.
Fuzzy Set Qualitative Comparative Analysis
Key Concepts behind the Methodological Approach
Originating over three decades ago (Ragin, Reference Ragin1987), QCA has enriched the toolbox of empirical research methods. QCA is an analytic methodology that combines quantitative and qualitative techniques by using Boolean logic and set theory instead of correlational methods (Ragin, Reference Ragin2000, Reference Ragin, Box-Steffensmeier, Brady and Collier2009; Ragin & Fiss, Reference Ragin, Fiss and Ragin2008). For this reason, in all QCA-based methods independent and dependent variables are referred to as “conditions” and “outcomes”, respectively. A condition refers to set membership in a variable used to explain the outcome, and an outcome refers to set membership in the variable explained by the conditions (Ragin, Reference Ragin1987). The method was initially restricted to small samples, but further advancements allowed for its use with larger data sets (e.g., Parente & Federo, Reference Parente and Federo2019).
QCA comprises several variants, the two most common being crisp-set (csQCA) and fuzzy-set (fsQCA). CsQCA is the original form of QCA, and its objective was to simplify complex configurations with the use of Boolean logic. CsQCA employs categorical conditions, giving each condition a value of either 1 (membership) or 0 (non-membership), and identifies combinations that consistently lead to a result using Boolean expressions that detect irrelevant conditions. The main concern with this approach is its limited applicability in the social sciences; in fact, the phenomena under investigation can rarely be reduced to dichotomies.
To overcome that limitation, the variant fsQCA was later developed (Ragin, Reference Ragin2000, Reference Ragin, Box-Steffensmeier, Brady and Collier2009). Fuzzy sets are “a class of objects with a continuum of grades of membership. Such a set is characterized by a membership (characteristic) function which assigns to each object a grade of membership ranging between zero and one” (Zadeh, Reference Zadeh1965, p. 338). The application of fuzzy sets to QCA allows the transformation of any value on an infinite continuum of degrees of membership ranging from 0 to 1 with 0.5 as the cross-over point or point of maximum ambiguity (Duşa, Reference Duşa2019). In terms of research applications in the social sciences, fsQCA has quickly surpassed its original counterpart (csQCA) due to its capacity to handle configurations of causal conditions based on the degree of membership as opposed to category memberships (Nikou et al., Reference Nikou, Mezei, Liguori and El Tarabishy2022; Pappas & Woodside, Reference Pappas and Woodside2021).
Causal Complexity
QCA allows the distinction of causes into necessary and sufficient conditions (Furnari et al., Reference Furnari, Crilly, Misangyi, Greckhamer, Fiss and Aguilera2021; Misangyi et al., Reference Misangyi, Greckhamer, Furnari, Fiss, Crilly and Aguilera2016). Necessary conditions denote that the focal outcome can only be obtained in the presence of the causal factor, and sufficient conditions that the presence of the causal factor always results in the focal outcome (Fiss, Reference Fiss2007). However, in applied research contexts it is very rare to identify single conditions that are necessary or sufficient; more often it is conjunctions of conditions that are associated with an outcome. The development of the QCA methodology is therefore intrinsically linked to the so-called principle of causal complexity (Gerrits & Pagliarin, Reference Gerrits and Pagliarin2021).
Causal complexity in the social sciences can be defined as “a situation in which a given outcome may follow from several different combinations of causal conditions” (2008, p. 124). This definition of causal complexity entails three aspects:
Conjunctural causation – When a combination of conditions produces the outcome. The complex causality approach assumes that specific patterns and combinations of conditions lead to an outcome (Ragin, Reference Ragin2014). In other words, phenomena are seen as results of interconnected attributes taken as a whole, and not of individual and separable entities (Aus, Reference Aus2009). From this standpoint, patterns and combinations of causal conditions are responsible for the outcome rather than individual independent variables. As a result, methods based on complex causality focus on identifying how different conditions interact to produce the desired outcome, uncovering causal combinations or recipes.
Causal asymmetry – When an outcome can result from the presence or from the absence of a given condition. According to the complex causality perspective, the fact that A causes B does not imply that B is connected to A in the same way (Vassinen, Reference Vassinen2012). Moreover, the combination of factors that lead to the presence of a result might differ from those leading to the absence of the same outcome. In other words, it is not necessary for factors causing the absence of one outcome to be the opposite of those causing the presence of that same outcome, as conditions “found to be causally related in one configuration may be unrelated or even inversely related in another” (Meyer et al., Reference Meyer, Tsui and Hinings1993, p. 1178). Consequently, methods adopting a complex-causality perspective need to test outcomes for the presence and absence of conditions leading to both the outcome and its absence.
Equifinality – When there is more than one path leading to the same outcome. The concept of equifinality entails the existence of multiple distinct configurations of conditions leading to the same outcome (Nikou et al., Reference Nikou, Mezei, Liguori and El Tarabishy2022). The principle holds that the same result may be reached through different solutions or paths; therefore, different causal recipes may yield the same outcome (Rippa et al., Reference Rippa, Ferruzzi, Holienka, Capaldo and Coduras2020; Rubinson et al., Reference Rubinson, Gerrits, Rutten and Greckhamer2019; Schneider & Eggert, Reference Schneider and Eggert2014).
Therefore, methods based on assumptions of causal complexity, like QCA, aim at providing multiple causal recipes composed of non-mutually exclusive conjoint attributes that are equally sufficient for the occurrence of an outcome (Gerrits & Pagliarin, Reference Gerrits and Pagliarin2021).
Hands-On Tutorial for Using fsQCA
Sample Description
This section is aimed at showing the principal procedures for performing fsQCA using a data set from an Italian manufacturing company (see Appendix). The data analysis focuses on four dimensions of the work-based learning scale (Nikolova et al., Reference Nikolova, van Ruysseveldt, De Witte and Syroit2014) as conditions: learning through reflection (LTR), learning through experimentation (LTE), learning from colleagues (LFC), and learning from supervisors (LFS). Work-based learning is defined by the perception of informal learning opportunities available in the workplace (e.g., Cangialosi et al., Reference Cangialosi, Odoardi and Battistelli2020). The scale comprises a total of 12 items, 3 for each dimension. Example items are “in my work I am given the opportunity to contemplate about different work methods” for LTR; “in my job I can try different work methods even if that does not deliver any useful results” for LTE; “my colleagues are eager to collaborate with me in finding a solution to a work problem” for LFC; and “my supervisor tips me on how to do my work” for LFS. As the outcome, the example concentrates on innovative work behavior (IWB; Janssen, Reference Janssen2000), representing the intentional creation, introduction, and application of new ideas within a work role, group, or organization, in order to benefit role performance, the group, or the organization (p. 228). The scale includes 9 items; one example is “in my job I generate original solutions for problems”. Both scales were previously adopted and validated in the Italian language (e.g., Battistelli et al., Reference Battistelli, Odoardi, Cangialosi, Di Napoli and Piccione2022; Cangialosi et al., Reference Cangialosi, Deprez, Odoardi and Battistelli2019).
The various steps and procedures were performed using the QCA package (Duşa, Reference Duşa2019) in R (R Core Team).Footnote 1 Before presenting the main steps of fsQCA, it is important to note that the statistical methods traditionally used to test the reliability and validity of constructs should be performed beforehand. These checks are particularly important when the values derive from rating scales used to measure opinions, attitudes, or behaviors, which are the most frequent in the organizational literature. However, the present study only presents the results of these tests without showing how to perform them, as they are outside the scope of QCA-related methods.
McDonald’s omega coefficient was computed to assess the reliability of the research variables, with all values above the recommended cut-off of .70 (Nunnally, Reference Nunnally and Wolman1978): innovative work behavior (IWB), ω = .95; LTR, ω = .89; LTE, ω = .88; LFC, ω = .85; LFS, ω = .85. Additionally, confirmatory factor analysis was used to assess the measurement validity of the proposed model. In the present study, all indices indicated good fit to the data for the five-factor solution: χ2(179) = 267.464, comparative fit index (CFI) = .949, Tucker-Lewis index (TLI) = .940, root mean square error of approximation (RMSEA) = .064, standardized root mean square residual (SRMR) = .063, with factor loadings ranging from .61 to .89.
Calibration
The first step of fsQCA is to carry out the data calibration procedure. This process allows for the transformation of the values of the study variables from raw numerical data into membership scores, or fuzzy sets (Duşa, Reference Duşa2019). Most types of data can be calibrated into fuzzy scores (e.g., test results, performance ratings, socio-demographic indicators, etc.); however, as psychosocial research typically relies on survey responses, it is important to note that when a variable includes several items, the calibration process works with the average aggregate value of the construct. Fuzzy sets are pseudo-continuous measures ranging from 0 to 1 (Ragin, Reference Ragin2000). From a set theory perspective, this implies that the value reflects the degree of inclusion in a specific set: the more a case belongs to the set, the higher the value associated with it, where 1 represents full membership in the set, 0 no membership, and 0.5 the point of maximum ambiguity.
There are different approaches to this procedure; however, the so-called direct calibration is emerging as common practice in the field, as it leads to results that are directly comparable with others found in the literature (Pappas & Woodside, Reference Pappas and Woodside2021). Direct calibration requires indicating three anchoring values for full inclusion, full exclusion, and the cross-over point. In practical terms, this means that specific values must be established to determine what can be regarded as low levels (no membership in the set), high levels (full membership in the set), and, additionally, a crossover point where values cannot be classified as either high or low (neither in nor out of the set). As a result, the calibration process allocates fuzzy scores ranging from 0.50 to 1 to raw values between the full inclusion anchor and the point of maximum ambiguity, and from 0 to 0.50 to those between the point of maximum ambiguity and the full exclusion anchor.
The anchors should be selected based on theoretical or substantive principles (Ragin, Reference Ragin, Box-Steffensmeier, Brady and Collier2009; Schneider & Wagemann, Reference Schneider and Wagemann2010), and replicability of the process is ensured by transparency when defining them (Greckhamer et al., Reference Greckhamer, Furnari, Fiss and Aguilera2018). Anchor selection can be guided by allocating particular value points on the Likert scale or aided by statistical measures, such as percentiles. The most common anchor values for full exclusion, crossover, and full inclusion are, respectively, 2, 4, and 6 for 7-point Likert scales (e.g., Meuer, Reference Meuer2014) and 2, 3, and 4 for 5-point Likert scales (e.g., Pappas & Woodside, Reference Pappas and Woodside2021). As for the percentiles approach, the usual anchor values are the 5th, 50th, and 95th percentiles (Pappas & Woodside, Reference Pappas and Woodside2021); however, the 20th, 50th, and 80th percentiles are also considered appropriate choices when data are asymmetrical or do not conform to normality assumptions (Pappas, Reference Pappas2017).
The calibration process is carried out through the mathematical estimation of the degree of membership for any given raw number by calculating the equivalent of the log odds (Duşa, Reference Duşa2019). Many types of functions can be used to aid this procedure (e.g., Thiem, Reference Thiem2014). Nevertheless, a logistic function is frequently preferred as it allows accounting for the normal distribution of data points. By giving more weight to the cross-over point, the logistic function accounts for the fact that values that are normally distributed tend to cluster around the mean (Duşa, Reference Duşa2019). Figure 1 illustrates the numerical relationship between raw data and calibrated values resulting from the logistic function. Raw data (1–5) is presented on the horizontal axis while fuzzy scores (0–1) are presented on the vertical axis. The figure shows each original value converted into a fuzzy score, starting with the low scores in the lower left and gradually moving towards the high ones in the upper right.
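To make the log-odds transformation concrete, the following minimal sketch reproduces the direct calibration logic in base R, assuming the common convention of assigning log odds of +3 and −3 (membership of roughly .95 and .05) to the full-inclusion and full-exclusion anchors; the calibrate function of the QCA package performs an equivalent computation automatically.

```r
# Minimal sketch of direct calibration via the logistic (log-odds) transform.
# Assumed convention: the full-inclusion anchor corresponds to log odds of +3
# (membership of about .95), the full-exclusion anchor to -3 (about .05), and
# the crossover anchor to 0 (membership of exactly .50).
direct_calibrate <- function(x, excl, cross, incl) {
  log_odds <- ifelse(
    x >= cross,
    (x - cross) * 3 / (incl - cross),  # values at or above the crossover point
    (x - cross) * 3 / (cross - excl)   # values below the crossover point
  )
  1 / (1 + exp(-log_odds))             # logistic transform into the 0-1 range
}

# Illustration on a 1-5 rating scale with anchors 2 (exclusion), 3 (crossover),
# and 4 (inclusion): returns approximately .00, .05, .50, .95, and 1.00
direct_calibrate(c(1, 2, 3, 4, 5), excl = 2, cross = 3, incl = 4)
```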
Once the calibration process is completed, it is important to subsequently verify that no fuzzy scores fall on the value of 0.50. Because the point of maximal ambiguity cannot be characterized as belonging to or not belonging to the set, configurations with at least one condition at the crossover point are unavoidably excluded from the analysis (Wagemann et al., Reference Wagemann, Buche and Siewert2016). To avoid this occurrence, researchers commonly add a constant of 0.001 to the original fuzzy scores (e.g., Fiss, Reference Fiss2011).
The presented study adopts the direct approach for calibration, selecting the 20th, 50th, and 80th percentiles as anchor values (see Table 1). After loading the QCA package in R and importing the data, percentiles are computed and subsequently used as anchor values. The calibration is performed with the calibrate function, with the selection of the three breakpoints (from the lowest to the highest) and the specification of the logistic function. In addition, fuzzy scores of 0.50 are replaced with 0.501 to prevent dropping relevant configurations from the subsequent analyses. As a result of the calibration process, all values of the study variables are transformed into fuzzy scores.
Note. IWB = innovative work behavior; LTR = learning through reflection; LTE = learning through experimentation; LFC = learning from colleagues; LFS = learning from supervisors.
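As an illustration of this step, the lines below sketch the calibration just described, assuming the raw aggregated scores have been imported into a data frame named raw with columns IWB, LTR, LTE, LFC, and LFS (object and column names are placeholders for this example); exact argument names of calibrate may vary slightly across versions of the QCA package.

```r
library(QCA)

vars <- c("IWB", "LTR", "LTE", "LFC", "LFS")
fsdata <- raw  # 'raw' is a placeholder data frame holding the aggregated scores

for (v in vars) {
  # 20th, 50th, and 80th percentiles as exclusion, crossover, and inclusion anchors
  anchors <- as.numeric(quantile(raw[[v]], probs = c(0.20, 0.50, 0.80), na.rm = TRUE))
  # Direct calibration with the logistic function
  fsdata[[v]] <- calibrate(raw[[v]], type = "fuzzy",
                           thresholds = anchors, logistic = TRUE)
}

# Shift scores sitting exactly on the point of maximum ambiguity (0.50)
fsdata[fsdata == 0.5] <- 0.501
```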
Necessity Conditions Analysis
The second step of fsQCA is the necessity conditions analysis. This is carried out to check whether any of the conditions is necessary for the outcome. A condition is defined as necessary when the outcome does not occur in its absence; while this type of causal relationship may not be sufficient to trigger the outcome by itself, it is deemed a necessary portion of the causal mix (Duşa, Reference Duşa2019). For example, if learning from the supervisor is a necessary condition for IWB, this entails that an employee can only show high levels of IWB when the levels of learning from the supervisor are also high; however, this does not mean that learning from the supervisor alone is causing employees’ IWB.
Due to the asymmetrical properties of this methodological approach, in the analysis of necessity it is important to test both the presence and the absence of each condition against both the outcome and its negation. This is because converting a condition to its opposite does not necessarily explain the opposite of the outcome. For instance, if high levels of learning from the supervisor are found necessary for IWB, this does not automatically imply that a lack of IWB is due to low levels of learning from the supervisor.
Verifying the absence of a relationship of necessity among variables is critical. However, as the organizational field typically analyzes issues characterized by a complex network of relationships among multiple variables, it is extremely rare to find a necessary condition that alone can offer a substantial causal contribution.
In fsQCA, conditions are considered necessary when their consistency value is greater than 0.90 in the analysis of necessity, meaning that at least 90% of the observations showing high levels of the outcome also show high levels of the condition. Although this threshold is set to account for data noise, randomness, and measurement inaccuracies (Ragin, Reference Ragin2000; Schneider & Wagemann, Reference Schneider and Wagemann2010), even when the consistency value surpasses 0.90 the result can be subject to Type 1 errors (Ragin, Reference Ragin2006).
This case is referred to as triviality, as opposed to relevance; a trivial condition can be described as a much larger set than the outcome. An example of this could be the necessity of a minimum amount of job tenure for innovation. An employee must have at least a minimal experience on the job in order to generate, promote and implement new and useful ideas in a workplace. But this is a trivial necessary condition because the mere presence of job tenure is an overarching condition that does not directly cause individual innovation.
To measure the relative importance of a condition in the causal mix and to avoid Type 1 errors, coverage scores can be adopted as an auxiliary indicator (Schneider & Wagemann, Reference Schneider and Wagemann2010). Coverage scores represent the percentage of X that is covered by Y, or more precisely by the intersection of X and Y, assuming that Y is already a (perfect) subset of X (Duşa, Reference Duşa2019). Therefore, a rule of thumb is that a condition is necessary when it concurrently shows consistency above 0.90 and coverage above 0.60 (Mattke et al., Reference Mattke, Maier, Weitzel and Thatcher2021).
The QCA package has the dedicated function pof, with a default argument value of relation = “necessity”, which calculates both consistency and coverage, the former indicated with inclN and the latter with covN. An example of the applied function is reported below; complete results are reported in Table 2, showing that none of the study conditions is necessary for the outcome or its absence.
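The snippet below sketches how this analysis could be run on the calibrated data frame (fsdata, the placeholder object from the calibration step). The tilde (~) denotes negation; with older versions of the QCA package the negated sets can instead be created manually as 1 minus the fuzzy score.

```r
# Necessity of each condition, and of its absence, for the outcome.
# inclN = consistency of necessity, covN = coverage (relevance of necessity)
pof("LTR + ~LTR + LTE + ~LTE + LFC + ~LFC + LFS + ~LFS",
    outcome = "IWB", data = fsdata, relation = "necessity")

# The same test for the negated outcome, given the asymmetry of set relations
pof("LTR + ~LTR + LTE + ~LTE + LFC + ~LFC + LFS + ~LFS",
    outcome = "~IWB", data = fsdata, relation = "necessity")

# The two parameters computed by hand for a single condition (LFS):
# consistency = sum(min(X, Y)) / sum(Y); coverage = sum(min(X, Y)) / sum(X)
sum(pmin(fsdata$LFS, fsdata$IWB)) / sum(fsdata$IWB)  # consistency of necessity
sum(pmin(fsdata$LFS, fsdata$IWB)) / sum(fsdata$LFS)  # coverage of necessity
```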
Note. ~ indicates absence (i.e., low levels); IWB = innovative work behavior; LTR = learning through reflection; LTE = learning through experimentation; LFC = learning from colleagues; LFS = learning from supervisors.
Truth Table
The third step of fsQCA involves generating the truth table. This is done to identify all the possible combinations of conditions present in the data and to evaluate how consistently each combination is associated with the outcome. The truth table comprises all potential combinations of conditions, consequently yielding 2^k rows (where k is the number of conditions). In the presented example, the four study conditions produce 16 rows, one for each configuration. As displayed in Table 3, each row reports the presence (1; when the fuzzy score is greater than 0.5) or absence (0; when the fuzzy score is smaller than 0.5) of the conditions for each possible combination.
Note. IWB = innovative work behavior; LTR = learning through reflection; LTE = learning through experimentation; LFC = learning from colleagues; LFS = learning from supervisors.
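To make the 2^k enumeration concrete, the base-R lines below list the 2^4 = 16 logically possible combinations of presence (1) and absence (0) of the four study conditions; the truthTable function shown in the next step builds this grid automatically and adds the empirical information.

```r
# All 2^4 = 16 logically possible combinations of the four conditions
configs <- expand.grid(LTR = 0:1, LTE = 0:1, LFC = 0:1, LFS = 0:1)
nrow(configs)  # 16
```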
Following its generation, the truth table must subsequently be refined to allow systematic calculations for establishing which configurations are leading to the outcome. The refinement of the truth table is accomplished by assigning specific cut-offs to frequency and consistency values.
The frequency represents the number of cases ascribed to a given configuration. A threshold is necessary to ensure that the evaluation of the relationships with the outcome is carried out on a sufficient number of cases per configuration. When setting a high frequency threshold, fewer configurations are retained and part of the sample is left unexplained, but each retained configuration rests on a larger number of cases. Conversely, with a low frequency threshold, more configurations and cases are retained, but some configurations are supported by very few cases, reducing the stability of the subsequent analyses. In consequence, both extremes may potentially result in findings that are not replicable. Hence, when choosing a frequency threshold, a balance must be struck between retaining well-populated configurations and keeping a sufficient pool of cases. The rule of thumb for determining a frequency threshold is 2 for small samples (N < 150) and 3 or more for larger samples, as long as at least 80 percent of the cases are retained (Fiss, Reference Fiss2011; Greckhamer et al., Reference Greckhamer, Misangyi, Fiss, Fiss, Cambré and Marx2013; Ragin, Reference Ragin, Box-Steffensmeier, Brady and Collier2009).
After removing rows that fail to meet the frequency threshold, the truth table should be further refined for consistency. Consistency measures the degree of association of a configuration with the study outcome, that is, the co-occurrence of the combination with high values of the outcome variable. A high consistency score indicates that the configuration consistently leads to the outcome; for instance, a consistency of 0.75 would indicate that 75% of the cases in the configuration share the same outcome. This measure can be compared to the concept of significance in correlational methods, with a minimum acceptable value of 0.75 (Greckhamer et al., Reference Greckhamer, Furnari, Fiss and Aguilera2018; Ragin, Reference Ragin, Box-Steffensmeier, Brady and Collier2009). However, to increase the reliability of the results, the consistency threshold should be determined after a thorough inspection of the data. Finding natural breaking points in the consistency values has been suggested as a sound indicator for choosing the consistency threshold (Pappas & Woodside, Reference Pappas and Woodside2021).
In addition, a second type of consistency should be used to further refine the truth table: the PRI consistency (i.e., proportional reduction in inconsistency). A PRI cut-off is employed to deal with the issue of simultaneous subset relations (i.e., when configurations are related to both the presence and the absence of the outcome). The PRI cut-off should be set at a minimum of 0.50 (Greckhamer et al., Reference Greckhamer, Furnari, Fiss and Aguilera2018) and possibly close to the consistency cut-off. When setting the consistency and PRI cut-offs, the outcome for all configurations with lower scores than the selected ones is automatically set to 0, and these configurations are consequently excluded from the analysis.
In this study, the cut-off values were set at 3 for frequency, 0.75 for consistency, and 0.50 for PRI. The QCA package has a dedicated function called truthTable, which allows the building of the truth table and its refinement based on the given thresholds. Table 3 presents the unrefined truth table for the study variables, encompassing all 16 possible configurations.
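The call below sketches how the refined truth table could be obtained from the calibrated data (fsdata is the placeholder object used throughout this example); incl.cut, n.cut, and pri.cut correspond to the consistency, frequency, and PRI thresholds discussed above, while the pri.cut argument and the ~ notation for the negated outcome are available only in recent versions of the QCA package.

```r
# Truth table for high levels of IWB, refined with the chosen thresholds
tt_iwb <- truthTable(fsdata, outcome = "IWB",
                     conditions = "LTR, LTE, LFC, LFS",
                     incl.cut = 0.75,   # consistency threshold
                     n.cut = 3,         # frequency threshold
                     pri.cut = 0.50,    # PRI threshold
                     show.cases = TRUE, sort.by = "incl")
tt_iwb

# Truth table for the negated outcome (low levels of IWB)
tt_niwb <- truthTable(fsdata, outcome = "~IWB",
                      conditions = "LTR, LTE, LFC, LFS",
                      incl.cut = 0.75, n.cut = 3, pri.cut = 0.50,
                      show.cases = TRUE, sort.by = "incl")
```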
Sufficiency Analysis
The final phase of fsQCA is the sufficiency analysis. Sufficiency can be defined as the property whereby all cases possessing specific attributes experience the same given outcome. Hence, this type of analysis seeks to identify all the different configurations that consistently lead to the outcome. This process is carried out with the Quine-McCluskey algorithm for Boolean minimization (or logical reduction).
Boolean minimization reduces the complexity of all suitable configurations without dropping any relevant information (Ragin, Reference Ragin2014). This process involves detecting and omitting conditions that are logically redundant (“don’t care” situations), that is, conditions present in a given configuration that do not really contribute to the outcome. For example, if two configurations lead to the same result and differ only in that one shows high and the other low levels of the same condition, it is possible to logically infer that the value of that condition is not relevant for the occurrence of the outcome. Logical minimization thus leads to configurations stripped of redundant conditions.
Three sets of solutions (complex, parsimonious, and intermediate) are calculated in the sufficiency analysis. These types of solution differ in how they deal with counterfactuals, that is, combinations of conditions that have not reached the frequency threshold and therefore lack sufficient empirical evidence. In other words, these are the combinations with an insufficient number of cases to allow a meaningful inference about their relationship with the outcome. The complex solution fully excludes counterfactuals from the analysis, whilst the parsimonious solution allows for any counterfactual that can help provide a logically simpler solution. The intermediate solution makes use of “easy counterfactuals”, those that are theoretically plausible (Liu et al., Reference Liu, Mezei, Kostakos and Li2017), which oftentimes translates into assuming their contribution to the outcome.
In addition, the complex solution offers all the possible combinations present after the application of the logical operations. In general, the interpretation of complex solutions is quite difficult as they result in many configurations with several conditions. For this reason, researchers usually rely on the parsimonious and intermediate solutions for drawing their conclusions. The parsimonious solution is based on all possible simplifying assumptions and presents the “core conditions”, which are important as they are present in all three types of solution (Fiss, Reference Fiss2011). The intermediate solution contains both the “core” conditions and the “peripheral” conditions, which appear in the intermediate solution but are eliminated in the parsimonious solution (Fiss, Reference Fiss2011).
The QCA package has a dedicated function called minimize, which allows for the generation of the three solutions (an example call is sketched below). Table 4 presents the intermediate result of the sufficiency analysis, distinguishing core and peripheral conditions. By comparing the intermediate and parsimonious solutions, core and peripheral conditions can be established. Each sufficient configuration presents values for raw coverage, unique coverage, and consistency; Table 5 contains the information for their interpretation.
Note. • indicates presence (i.e., high levels), ø indicates absence (i.e., low levels), blank space indicates “don’t care” (levels of the condition are not relevant to that configuration in regard to the outcome); Large circles suggest “core” or central conditions, while small circles indicate “contributing” or peripheral conditions; IWB = innovative work behavior; LTR = learning through reflection; LTE = learning through experimentation; LFC = learning from colleagues; LFS = learning from supervisors.
Note. Adapted from Ragin (Reference Ragin2006).
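A sketch of how the three solutions could be derived from the truth-table objects created in the previous step is given below. The directional expectations passed to dir.exp are an illustrative assumption (namely, that the presence of each learning dimension is expected to contribute to IWB), and their exact format may differ across versions of the QCA package.

```r
# Complex (conservative) solution: no counterfactuals are used
sol_complex <- minimize(tt_iwb, details = TRUE)

# Parsimonious solution: any counterfactual that yields a simpler solution is
# used; its conditions are the "core" conditions
sol_parsimonious <- minimize(tt_iwb, include = "?", details = TRUE)

# Intermediate solution: only "easy" counterfactuals consistent with the
# directional expectations are used (assumed here: presence of each condition
# is expected to contribute to the outcome)
sol_intermediate <- minimize(tt_iwb, include = "?", details = TRUE,
                             dir.exp = "LTR, LTE, LFC, LFS")
sol_intermediate

# Core and peripheral conditions are then identified by comparing the terms of
# the parsimonious and intermediate solutions; the same commands are repeated
# on tt_niwb for the negated outcome
```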
The results show the existence of two configurations sufficient for high levels of IWB. The first configuration is characterized by high levels of learning through reflection and low levels of learning through experimentation as core conditions, and high levels of learning from supervisors as peripheral. The second configuration is described by high levels of learning through reflection, learning from colleagues, and learning from supervisors, all as core conditions. In addition, the results also highlight the existence of two configurations leading to low levels of IWB. The first is characterized by low levels of learning through reflection and learning from colleagues as core conditions. The second is characterized by low levels of learning from colleagues and learning from supervisors as core conditions, with high levels of learning through experimentation as peripheral.
Robustness Checks
As an additional step, sensitivity analyses are performed to assess the robustness of the results after generating and reporting the solutions. This stage is necessary because fsQCA relies on guidelines and recommendations to determine thresholds; accordingly, it is key to check whether changing the various parameters and cut-offs significantly alters the results and to report the extent of the change. To this end, common practice requires reporting findings obtained with different frequency, consistency, and PRI thresholds, along with those obtained on data recalibrated with alternative anchoring values.
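As a sketch of this stage, the lines below rerun the analysis with alternative thresholds and an alternative calibration, following the logic described above; the specific alternative values shown (a consistency cut-off of 0.80, a frequency cut-off of 2, and 10th/50th/90th percentile anchors) are illustrative assumptions rather than fixed recommendations.

```r
# (a) Same calibrated data, alternative consistency and frequency thresholds
tt_alt <- truthTable(fsdata, outcome = "IWB",
                     conditions = "LTR, LTE, LFC, LFS",
                     incl.cut = 0.80, n.cut = 2, pri.cut = 0.50)
minimize(tt_alt, include = "?", details = TRUE)

# (b) Recalibrated data with alternative anchors (10th, 50th, 90th percentiles)
fsdata2 <- raw
for (v in c("IWB", "LTR", "LTE", "LFC", "LFS")) {
  anchors <- as.numeric(quantile(raw[[v]], probs = c(0.10, 0.50, 0.90), na.rm = TRUE))
  fsdata2[[v]] <- calibrate(raw[[v]], type = "fuzzy",
                            thresholds = anchors, logistic = TRUE)
}
fsdata2[fsdata2 == 0.5] <- 0.501
tt_recal <- truthTable(fsdata2, outcome = "IWB",
                       conditions = "LTR, LTE, LFC, LFS",
                       incl.cut = 0.75, n.cut = 3, pri.cut = 0.50)
minimize(tt_recal, include = "?", details = TRUE)

# The resulting configurations are compared with the original solution in
# terms of consistency, coverage, and subset relations
```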
Table 6 shows, as an example, the results of repeated analyses with different cut-offs and calibration levels for IWB. From the analysis of the table, the robustness of the original results can be inferred on the basis of two criteria: the different results obtained show limited differences in terms of consistency and coverage and do not suggest different interpretations; and/or they are in a subset relationship with the original results (Wagemann & Schneider, Reference Wagemann and Schneider2015).
Note. LTR = learning through reflection; LTE = learning through experimentation; LFC = learning from colleagues; LFS = learning from supervisors.
Conclusion
The assumptions of causal complexity are inherent in the psychosocial processes occurring in organizational environments, where multiple components jointly interact with one another (Fiss, Reference Fiss2011; Ott et al., Reference Ott, Sinkovics, Hoque, Cassell, Cunliffe and Grandy2018; Rihoux & Ragin, Reference Rihoux and Ragin2009). As a result, rather than by single and isolated components, organizational phenomena can be more accurately explained by configurations leading to a certain criterion (e.g., Short et al., Reference Short, Payne and Ketchen2008). Therefore, methods like fsQCA, designed to address configurational hypotheses, have significant potential applications for organizational research (e.g., Parente & Federo, Reference Parente and Federo2019; Schwab & Golla, Reference Schwab and Golla2021).
Compared to traditional analysis techniques, an advantage of fsQCA is that it is well suited to studying complex, non-linear relationships between variables (Schneider & Wagemann, Reference Schneider and Wagemann2010). It also allows researchers to analyze the effects of configurations of variables, rather than simply examining the effects of individual variables (Ragin, Reference Ragin2006). However, fsQCA can be computationally intensive, and the results are often difficult to interpret for researchers who are not familiar with the technique (Caren & Panofsky, Reference Caren and Panofsky2015). Additionally, the validity and reliability of the results can be affected by the choice of membership functions and calibration methods used in the analysis (Schneider & Wagemann, Reference Schneider and Wagemann2010).
By employing the fsQCA approach effectively, organizational psychology can gain further insights into the nature of causal configurations. But for this to happen, continuous development and refinement of its application to research in organizational psychology is required. The fsQCA method has made its way into many research contexts, and great strides have been made in its dissemination; however, so far there has been a lack of dedicated work informing the community about its practices and correct guidelines (Rihoux & Ragin, Reference Rihoux and Ragin2018). In order to fill this gap, this paper has introduced fsQCA by providing a general overview of the method, offered guidelines for good practice that provide further background knowledge and know-how regarding the use of this method, and outlined a practical example with a step-by-step tutorial for using the QCA package for R.
It is hoped that this article will contribute to the research community by further stimulating interest in configurational approaches in the field of organizational psychology, serving as a useful starting point for readers who want to learn more about this methodology, and providing an additional resource to help organizational psychologists deliver high-quality research.