
Quantifying the scientist–practitioner gap: How do small business owners react to our academic articles?

Published online by Cambridge University Press:  27 August 2024

Steven Zhou*
Affiliation:
George Mason University, Fairfax, VA, USA
Lauren N.P. Campbell
Affiliation:
George Mason University, Fairfax, VA, USA
Shea Fyffe
Affiliation:
George Mason University, Fairfax, VA, USA
Corresponding author: Steven Zhou; Email: [email protected]

Abstract

Much ink has been spilled on the scientist–practitioner gap, that is, the apparent divide between knowledge published in academic peer-reviewed journals and the actual business practices employed in modern organizations. Most prior papers have advanced meaningful theories on why the gap exists, ranging from poor communication skills on the part of academics to paywalls and other obstacles preventing the public from accessing research in industrial-organizational psychology (I-O). However, very few papers on the scientist–practitioner gap have taken an empirical approach to better understand why the gap exists and what can be done about it. In our focal article, we specifically discuss the gap as it pertains to small businesses and present empirical data on the topic. Drawing from our experiences working with and in small businesses before entering a PhD program, we suggest that a primary reason for the existence of this gap is the differences between large and small businesses, and we advance two theory-driven reasons for why this is the case. Next, we compiled abstracts and practical implications sections from articles published in top I-O journals in the past 5 years, then we collected ratings and open-ended text responses from subject matter experts (i.e., small business owners and managers) in reaction to reading these sections. We close by recommending several potential perspectives, both for and against our arguments, that peer commentators can take in their responses to our focal article.

Type
Focal Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Society for Industrial and Organizational Psychology

Introduction

Theory is when you know everything, but nothing works.

Practice is when everything works, but no one knows why.

In our lab, theory and practice are combined: nothing works and no one knows why.

(Anonymous, n.d.)

Joking aside, this meme poignantly critiques I-O and related fields for insufficiently bridging the scientist–practitioner gap; that is, the phenomenon wherein “research results do not address existing problems and practical needs” (Belli, 2010, p. 2). Because it is based on the “scientist–practitioner model,” the field of I-O, in particular, should be engaging in practices that bridge this divide, for example, by communicating and applying empirical, academic research to the workplace and allowing workplace phenomena to inform and guide academic research.

Sadly, many would agree that the divide between I-O research (especially research published in academic peer-reviewed journals) and practitioners still exists. Several important perspectives articles have laid out the growing problems of this scientist–practitioner gap (Banks et al., 2016; Bartunek & Rynes, 2014; Rotolo et al., 2018). These studies suggested that academic research is behind the times, that academics and practitioners differ in their ratings of article quality, and that the content areas represented by academic journals versus HR practitioner outlets differ (Deadrick & Gibson, 2007; Nicolai et al., 2011; White et al., 2022). Although this has been a persistent challenge, recent societal upheavals and changes (e.g., COVID, social justice movements) have made it all the more important for scholars researching key topics, such as managing remote employees during crises or best practices to promote inclusivity in organizations, to effectively engage the public and communicate their findings (Goldstein et al., 2020; Kossek & Lee, 2020; Lewis & Wai, 2021; Rogelberg et al., 2022).

The present article builds on the existing foundation of research on the scientist–practitioner gap by specifically raising the question of the relevance of I-O research for small businesses. We argue that much of the published research in I-O psychology has been based on theories that center around large organizations—often failing to acknowledge the circumstances of small businesses. For example, consider leadership theories: Common sense suggests that serving as the CEO of a 10-person company is very different from serving as the CEO of a 100,000-person company, yet most leadership theories fail to distinguish between the two. Even so, small businesses are where the “action” is for most people. In the US, 99.9% of businesses are considered small businesses (i.e., fewer than 500 employees; U. S. Small Business Administration, 2022). Moreover, many more people may work in jobs that are not even measured or conceptualized as jobs as traditionally defined in a developed economy (e.g., subsistence farming). Thus, we argue that, in our ongoing discussion on the academic–practitioner gap in I-O research, we have missed a critical aspect that comes from considering the purported beneficiaries of our research—in this case, small business owners and managers who might benefit from the knowledge generated in I-O. In other words, we believe that the gap widens when we consider small businesses, and this concept deserves much discussion and debate to better understand if, how, and why I-O research needs to be better situated for small business audiences.

Given the dearth of empirical data investigating the scientist–practitioner gap, we introduce and examine subject matter expert (SME) empirical ratings of academic articles to guide our arguments. In what follows, we begin by reviewing the literature on the scientist–practitioner gap and then explain the rationale for focusing specifically on small businesses. Afterward, we advance two propositions for why I-O research may not be applicable and relevant for small businesses, supported by evidence from our SME ratings. We conclude with an Invitation for Debate section, where we offer example perspectives, open to dispute, that commentaries can take in response to our article, both in support of and in opposition to our arguments.

The need for relevance

The scientist–practitioner gap is not a new concept, nor is it exclusive to I-O psychology. Belli (2010) summarized the issue across multiple fields including computer science, education, healthcare, management, and political science, noting:

A common interpretation of the divide between theory and practice, regardless of the field, refers to the dichotomy between two cultures. On one side are the researchers, intent on the rigors of sound academic research but divorced from the ongoing concerns of practice, and who are dismayed about the fact that practitioners are not reading or using their research results. On the other side are the practitioners, concerned with relevance in terms of bettering their practice but not interested in theoretical reasoning, and who claim that research results do not address existing problems and practical needs. (p. 2)

The summary captures the gist of the concerns motivating research on the scientist–practitioner gap, from both academics and practitioners.

More recent studies on the scientist–practitioner gap, specifically in the field of I-O and business management, have added nuance and empirical evidence to this perspective. Bartunek and Rynes (2014) published a guest editorial in the Journal of Management focusing specifically on strategies for reducing the tension between academics and practitioners. Their review of the literature revealed several insights. First, research on the scientist–practitioner gap has increased considerably since 2000, yet only 13% of the publications are empirical; most are opinion editorials. Second, most of these papers are written by academics in academic journals and aimed at fellow academics with little engagement from practitioners. Finally, they recommended that future empirical research leverage data to address the tensions between academics and practitioners, for instance, data illustrating how academics can better partner with practitioners to conduct and disseminate research. A similar review by Banks et al. (2016) published in the Academy of Management Journal analyzed 38 interviews and 1,767 survey responses from both academics and practitioners to propose a conceptual model for studying the scientist–practitioner gap. Their analysis suggested that the gap could be addressed through two primary avenues: (a) fostering collaboration to align on relevant content areas and (b) incentivizing communication channels that are more effective for knowledge transfer.

In addition to these large-scale reviews, several empirical studies have demonstrated the existence and importance of the gap. For example, Deadrick and Gibson (2007) analyzed 4,356 articles from both academic-oriented journals (Journal of Applied Psychology and Personnel Psychology) and practitioner-oriented outlets (Human Resource Management and HR Magazine). They found that academic journals tended to focus on motivation and staffing, whereas practitioner outlets tended to focus on compensation and HR department innovations/effectiveness. Although motivation and compensation are certainly related, and HR effectiveness is often operationalized through measures of staffing effectiveness, the differences in nomenclature may suggest differences in how studies are conceptualized and communicated. Similarly, Baldridge et al. (2004) analyzed 120 articles published in six leading management journals in 1994 and 1995 and asked 31 board members of the Academy of Management Executive to rate each article’s relevance to practitioners. They found only a weak correlation (r = 0.20) between weighted citation count (a proxy for article quality) and relevance ratings. Finally, Nicolai et al. (2011) partnered with Zeitschrift für Führung und Organisation—a German leadership and organizations journal (impact factor = 0.82 as of 2021)—to compare reviewer feedback from 315 academics and 263 practitioners from 1995 to 2005, finding only a weak correlation (r = 0.19) between the accept/reject recommendations provided by each group.

Although this large, robust, and growing literature demonstrates the importance of addressing the scientist–practitioner gap, two broad questions remain unanswered. First, why is there such a gap? Many perspectives articles have proposed some ideas, but as Bartunek and Rynes (2014) noted, such efforts are (a) primarily written from an academic perspective and/or (b) lack empirical evidence. Drawing from our experiences working with and in small businesses before entering a PhD program, we suggest that a primary reason for the existence of this gap is the inherent differences between large and small businesses. In the next section, we explain why we believe that I-O research has focused too much on large businesses and how this creates a gap preventing research findings from being effectively applied to small businesses. Second, what empirical evidence is there of such a gap? In this focal article, we collected the abstract and practical implications sections from articles published in top I-O journals in the past 5 years. We then collected ratings and open-ended text responses from subject matter experts (i.e., small business owners and managers) in reaction to reading these sections. Thus, we believe this focal article advances meaningful contributions to the discussion on the scientist–practitioner gap, in addition to proposing debatable ideas and recommendations that will elicit interesting and insightful commentary responses.

Are we too focused on big businesses?

The foundational mission of the Society for Industrial and Organizational Psychology is to “enhance human well-being and performance in organizational and work settings by promoting the science, practice, and teaching of industrial-organizational psychology” (SIOP, https://www.siop.org/About-SIOP/Mission). Clear in this statement is an intention to support I-O psychologists on both sides of the scientist–practitioner gap. What is unclear, however, is which organizations and work settings are meant to be enhanced by these I-O academics, practitioners, and teachers. To understand where I-O might provide the most impact, it is helpful to identify what most organizations are like and, similarly, what types of organizations employ most of the workforce. In the United States, the answer is overwhelmingly small businesses.

Small businesses in the U.S. are classified by the Small Business Administration (SBA), which relies on data from the U.S. Census Bureau. The SBA has released official standards by which an organization might be considered small across the industry categories defined by the 2012 North American Industry Classification System (NAICS). The largest of these standards includes businesses with up to 1,500 employees and $38.5M in average receipts (Footnote 1); other industries, however, define small businesses as ranging from fewer than 150 to fewer than 500 employees (U. S. Small Business Administration, 2016). At the same time, using a different definition of small business (i.e., “fewer than 500 employees”), the SBA Office of Advocacy reported that 33.2M small businesses comprise 99.9% of U.S. businesses (U. S. Small Business Administration, 2022). Of these 33.2M small businesses, 82% have no regular employees, such as sole proprietorships or general partnerships; 16% have fewer than 20 employees; and 2% have between 20 and 500 employees. Overall, small businesses employ 61.7M people, which is 46.4% of the U.S. workforce.

Given that nearly half of the U.S. workforce works for small businesses, it raises the question of whether and how I-O has focused on these workers and the specific nature of their organizations. Although we acknowledge that some studies may have included small businesses and their employees incidentally in efforts to answer broader, organizational-level questions and/or narrower, individual-level questions, we aimed to capture studies specifically focused on small businesses and their employees. As such, to answer this question, we searched the top 10 academic journals in I-O psychology (Footnote 2) from the SCOPUS (https://scopus.com/sources.uri) and EBSCOhost (https://search.ebscohost.com) academic databases. Article metadata (e.g., author, title, abstract, and year) were manually downloaded and compiled for articles published between 1950 and 2021. Then, we searched article titles and abstracts for the keyword strings “small?business*,” “small?firm*,” “small?compan*,” or “small?enterprise*,” where ‘?’ is a placeholder for any single nonletter character and ‘*’ is a placeholder for any series of zero or more letter characters (Footnote 3). We present the results in Table 1. In examining the results, it is clear that very little attention has been given to such contexts at the highest level of research. The highest proportion of hits was in Personnel Psychology, where 7 of the 1,750 articles matched the keyword strings (i.e., 0.40%). In total across the 10 journals, only 36 articles out of 20,899 mentioned small businesses (i.e., 0.17%).

Table 1. Occurrence of Small Business Mentions in Top I-O Psychology Journal Articles

Notes. Multiple occurrences within articles were only counted once. Articles matched are the number of articles matching a search pattern of “small?business*,” “small?firm*,” “small?compan*,” or “small?enterprise*” where ‘?’ is a placeholder for any single nonletter character and ‘*’ is a placeholder for any series of zero or more letter characters.
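For illustration, the following is a minimal sketch of how this keyword search could be implemented; it is not the exact script used for this study, and the function name and example records are hypothetical placeholders. The sketch translates ‘?’ into a single nonletter character and ‘*’ into zero or more letter characters, as described above.

```python
import re

# Assumed translation of the keyword strings into a regular expression:
# '?' -> any single non-letter character; '*' -> zero or more letter characters.
PATTERN = re.compile(
    r"small[^A-Za-z](?:business|firm|compan|enterprise)[A-Za-z]*",
    flags=re.IGNORECASE,
)

def mentions_small_business(title: str, abstract: str) -> bool:
    """Return True if either the title or the abstract matches a keyword string."""
    return bool(PATTERN.search(title) or PATTERN.search(abstract))

# Hypothetical usage: tally matches across downloaded article metadata.
articles = [
    ("Selection in small firms", "We examine hiring practices in small businesses."),
    ("Leadership and multiteam systems", "A field study in a Fortune 500 company."),
]
hits = sum(mentions_small_business(title, abstract) for title, abstract in articles)
print(f"{hits} of {len(articles)} articles matched")  # prints: 1 of 2 articles matched
```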

These patterns raise the question of why I-O psychology research has historically focused on larger companies. We recognize that the entire field of I-O psychology originally emerged to address the optimization of individual performance within a large organization: Münsterberg and his contemporaries’ work was catalyzed by the selection needs of the US Army in the face of World War I (for an overview, see Landy, 1992). Though the field has since grown beyond the military context and reached into many varied business sectors, the most common I-O career paths involve employment with large organizations (e.g., universities, military, and government agencies), either directly as internal employees of these typically large organizations or indirectly through external consulting work (Zelin et al., 2015). However, practical challenges have persisted that may prevent I-Os from effectively engaging small businesses, including resource limitations (e.g., funding and small research sample sizes). Though researchers have called for I-O psychology to expand its focus on the differences among organizations, and have even highlighted small businesses as an avenue for this effort (Schneider & Pulakos, 2022), addressing these practical challenges will be critical for future researchers in this area. Given these reasons, it is understandable that much of I-O research and applied work has focused on large businesses.

Of course, one could argue at this point that I-O research need not focus on small businesses and that the theories and findings produced in current studies are equally applicable to small and large businesses. We argue that this is not the case. There are likely meaningful differences between small and large businesses. Moreover, small businesses and their workers merit intentional and specific focus within academic research. We suggest that these differences constitute not only direct barriers to conducting research but also consequential differences in which topics receive research focus and how findings are applied. To that end, we explore how we expect these differences to emerge in two ways: I-O research conducted on larger organizations might focus on theories that are irrelevant to small businesses, and such research might offer recommendations that are infeasible and impractical with limited resources.

Irrelevant theories

One major contributor to the scientist–practitioner gap is the potential misalignment between the theories and concepts studied in academic papers versus the actual areas of interest among practitioners. For example, Deadrick and Gibson (2007) compared the topics of interest of HR academics and HR professionals. They concluded, “In terms of rank-ordered interest areas… HR Professionals and Academics do not agree on the importance of most topic areas” (p. 134). Specifically, HR professionals were more interested in technical job information such as compliance, compensation and benefits, and managing day-to-day job demands; HR academics were more interested in generalizable research theories such as motivation and individual differences. As noted previously, although we recognize that these constructs (i.e., compensation and motivation) may be related, the specific differences in how these studies were conceptualized and communicated may represent differing interests between the two fields. Van de Ven and Johnson (2006) called this a knowledge production problem; that is, researchers often pursue content areas that are not aligned with the interests of practitioners.

This problem is likely to be exacerbated when one considers the alignment between I-O research topics and the day-to-day needs of small business owners and managers. The 2017 Journal of Applied Psychology special issue (Kozlowski et al., 2017) examined 100 years of studies on the key research topics within I-O: building the workforce (e.g., individual differences, staffing, training), managing the workforce (e.g., motivation, well-being, leadership), managing differences within and between organizations (e.g., diversity, cross-cultural issues), and exiting work (e.g., turnover, career management). However, consider how staffing functions within a small business with a smaller number of potential candidates and limited resources. Rather than constructing a validated, selective recruiting procedure from a large pool of candidates, small businesses are likely to employ colleagues or close contacts whom they can trust with the work. Similarly, consider how career management functions within a small business with only a few jobs available. There may not be a large, robust job board of internal openings, and employee development programs are unlikely to have the resources necessary to build skills outside of one’s immediate job demands.

Finally, even theories in areas that would seem to apply in all organizations regardless of their size, such as leadership and teamwork, might function differently in a small organization. Leadership theories, especially in top management teams or executive strategy, often focus on how executives lead large, complex organizations (e.g., Marion & Uhl-Bien, 2001). However, leading a 100,000-person organization is very different from leading a 10-person organization. The theories of effective leadership advanced in most scholarly journals advocate for behaviors or concepts that simply are not feasible or relevant in small businesses. For example, one approach to leadership emphasizes the density of the leader’s social network (Hoppe & Reinelt, 2010). Cultivating an “inner network” for the executive is possible when there are thousands of employees and a handful of top management team members; it is not possible when there are fewer than 10 employees in total, because then the entire organization is the executive’s inner network. As another example, much research has been conducted on coordinating across multiteam systems (e.g., DeChurch & Marks, 2006). By definition, multiteam systems require two or more teams, often functionally defined teams (e.g., coordinating the Sales team with the IT team). This is not relevant when the entire organization is composed of a few individuals working together on the same team to run every aspect of the organization. With 78.5% of all U.S. businesses having fewer than 10 employees (Small Business & Entrepreneurship Council, 2019), we argue that the topics studied in I-O academic research are not very relevant to most American businesses.

Impractical recommendations

Even if the theories we study are of interest or are relevant for small businesses, it is far less likely for the practical recommendations provided by many academic manuscripts to be feasible and practical. Again, this is not a new phenomenon, even in large businesses. Vosburgh (2022) reviewed several prior studies showing that only 3% of reviewed HR articles proposed solutions to real-world problems, 42% did not have any implications for practical application, and 1% scored higher on “Applicability” compared to importance and statistical significance on a scale of research utility. In short, many academic articles provide excellent and interesting insight into an important work phenomenon, but they fail to clearly articulate ways that practitioners can apply these insights to their day-to-day jobs.

Again, this gap is likely to be even more apparent with small businesses due in large part to limitations of financial and labor resources. The average small business begins with $10,000 or less in capital investments (Small Business Labs, 2014). Even with the efforts of advocacy organizations, such as SBA’s Office of Advocacy and the National Small Business Association (NSBA), small businesses still have limited resources in terms of finances and employee time. These limitations make it difficult to enact the large, complicated, and expensive “recommendations” that many academic studies propose. Imagine trying to implement the full suite of flex schedules, parental leave, and telecommuting options that are often recommended to maximize employee well-being. How can small businesses do that when there are only a few employees responsible for managing everything there is to do in the business and no excess money to fund backup employees or health and wellness benefits?

These limitations are akin to the limitations discussed in recent articles critiquing I-O’s focus on white-collar employees. Kossek and Lautsch’s (2018) comprehensive review of work–life balance recommendations argued that most prior studies’ recommendations were only applicable to managerial and professional employees (e.g., higher-paid, salaried, white-collar work) and not helpful for lower-wage, blue-collar work. For example, despite the value of teleworking, 76% of lower-wage workers had jobs with responsibilities that could not be performed remotely, compared to just 44% of higher-wage workers (Parker et al., 2020). In other words, lower-SES workers disproportionately face the challenge of having jobs with responsibilities that require in-person work (e.g., grocery store checkout clerk, nurse assistant), thus precluding them from common work–life balance initiatives that allow for teleworking or hybrid work. Furthermore, Kossek and Lee (2020) described this as clear evidence of how the pandemic has exposed work–life inequalities in the US, especially because the “frontline” jobs designated as “essential” and thus excluded from teleworking options are disproportionately filled by blue-collar workers.

Put together, we posit that a major contributing factor to the scientist–practitioner gap is the misalignment in theory and implications between academic research and small businesses. This can be due to issues such as I-O theories implicitly assuming that businesses are large enough for phenomena to occur or for theories to have an impact on day-to-day management, or practical issues such as recommendations requiring large amounts of financial or labor resources that are not feasible for small businesses. Especially given the prevalence of small businesses in the US, we believe that research has disproportionately favored topics and recommendations that are more applicable to larger businesses that make up less than 1% of the total number of US businesses and a little over half of the US workforce (U. S. Small Business Administration, 2022). As such, this focal article puts up for debate the primary proposition that I-O research should do more to specifically focus on topics relevant to small businesses and provide recommendations that are feasible for small businesses. Importantly, the purpose of this article is not to prove that specific theories are irrelevant or that specific recommendations are impractical, but rather to demonstrate (through some preliminary evidence) that the scientist–practitioner gap may be exacerbated for small businesses. We hope that commentaries written in response to this focal article will build upon our ideas by either supporting or countering them with alternative evidence, testable hypotheses, and future research.

Preliminary evidence

The two concepts we discussed—irrelevant theories and impractical recommendations—are just two examples of ways we believe that I-O research has overlooked the needs of small businesses. There are certainly many other possible reasons, explanations, causal mechanisms, and influences that could contribute to this, such as the influence of new technology and AI, differences in nomenclature whereby the same concept is called something different by academics versus practitioners, and publication details such as where articles are published and how they can be accessed. We hope that commentaries and future studies can investigate some of these in detail, especially through empirical data collection given the scarcity of empirical data on the scientist–practitioner gap (Bartunek & Rynes, 2014). Here, we provide an example of some empirical data we were able to collect from small business owners and managers that provides some preliminary supporting evidence for I-O researchers to more seriously consider the needs of small businesses. Specifically, we gathered the article abstracts and “practical implications” sections from top journal articles in I-O on leadership and teams, presented them to small business owners and managers, and asked for their reactions using both closed-ended surveys and open-ended text questions.

Methods

We began by downloading all articles published in the Journal of Management, Leadership Quarterly, Journal of Organizational Behavior, Journal of Applied Psychology, and Personnel Psychology between 2016 and 2021 with the keywords “teams” or “leader*” (* as a wildcard in search), saving only the empirical articles. We focused on these topics as they seem to have the most potential to apply to small businesses. Regardless of size, any business will have at least one person “in charge”—in other words, a leader. Moreover, most definitions of a team bound it as an interdependent group of two or more people (e.g., Salas et al., 2008). As such, even small businesses with just a few employees are likely subject to team dynamics; therefore, team research should be relevant to small businesses.

A total of 474 articles were downloaded; of these, we randomly selected 75 articles to be coded and rated. We first carefully read the Methods section of each article to determine the sample size and the number of organizations the study sampled from, looking for any indications of the size of the organizations in the sample. Of the 114 samples reported in the 75 articles, the majority sampled from organizations with over 100 employees (31.58%) or used only student samples from large universities (21.93%). Forty-three samples (37.72%) recruited a large pool of participants employed in a variety of different organizations, but they did not report any metric of organization size or control for it. Thus, these studies ignored the potential effect of organization size. Seven samples (6.14%) included some indication that small businesses were in the sample. For example, one of these seven studies indicated that about 34% of their sample came from companies with 100 employees or fewer, and another indicated that their sample came from one large university, one large company, and four small-to-medium public relations firms (without clarifying “small-to-medium”). Of these seven studies, only one statistically controlled for organization size, stating that the accessibility of supervisors might differ as a function of organization size. A second examined the effect of organization size on the dependent variable and reported a nonsignificant finding across 230 employees nested in 50 teams across 42 organizations. Finally, three samples (2.63%) were recruited from just one or two organizations with small enough sample sizes (i.e., 75 employees in a real estate company, 45 firefighters in a fire department, and 230 team members across two financial service companies) such that the organization might be considered a small business. However, the studies did not report the organization size (e.g., 75 employees in the sample may have been recruited from a large organization depending on response rate) or control for it in any way. Of note, none of the 114 samples were explicitly reported as being drawn from an organization with fewer than 100 employees. Thus, there is an immediate, apparent lack of focus in top-tier journal articles on ensuring that I-O research adequately explores the unique characteristics and socio-organizational processes of small businesses. Although these studies may include small businesses when recruiting large pools of participants, it seems they are not adequately exploring the impact of organization size on their findings.

We proceeded by extracting the text from the abstract and practical implications sections of each of the 75 articles. We recruited 79 small business owners or managers to serve as SMEs. Participants were recruited via snowball sampling (n = 20) and Prolific (n = 59); their mean age was 38.29 (SD = 11.04), 48 were men, 29 were women, and 2 were nonbinary, and 65 identified as White, 10 as Asian, 2 as Black or African American, 1 as Hispanic or Latino, and 1 as American Indian or Alaskan Native. To be eligible, participants had to self-identify as working in a small business (Footnote 4). When asked in the survey about the size of their organization, participants reported an average of 90 full-time employees and 26 part-time employees. Participants were randomly assigned to read the abstract and practical implications of three articles. For each article, they first read the abstract and then answered a few Likert-type questions about the writing quality, degree of interest in the topic, relevance, and how much money their company would pay to access the full article. Next, they read the practical implications and answered a few Likert-type questions about the writing quality, effectiveness of the recommended practices, and ROI. The SMEs were also asked to respond to two open-ended items, one each regarding the article’s abstract and practical implications, to further justify their ratings. The exact questions and the labels used hereafter for each question can be found in Appendix A. Demographic information about the SMEs can be found in Appendix B.

Survey data results

First, we looked at the overall descriptive statistics for each variable of interest across all raters, as shown in Table 2 and Figure 1 below.

Figure 1. Density plot of scores (ranging from 1 to 5) across all raters on each variable.

Table 2. Descriptive Statistics of Survey Data

Overall, as expected, the survey data ratings were low on a 5-point scale. For example, almost half of SMEs said that the abstract was only “slightly” or “not at all” interesting or helpful. Similarly, less than a third of SMEs said that the recommended practical implications were “very” or “extremely” effective or appropriate for their small business. However, both questions on writing quality were highly rated. Interestingly, the survey data suggest that SMEs responded more positively to the practical implications sections than to the abstracts. Most notably, almost half of SMEs said that there were “few” or “no other” better approaches to addressing the problem identified in the article, compared to the recommended practical implications.

We also tested whether ratings differed as a function of various article-level variables such as the journal in which the article was published, publication year, citation rate (number of citations adjusted for months since publication), and number of authors. Additionally, we extracted several metrics from the abstracts and practical implications sections of each article; these were the number of prepositional phrases in each section, average sentence length, average number of syllables per word, and readability score (using Flesch’s Reading Ease Score; Footnote 5). For both article-level and section-level features, we ran linear regressions (or ANOVAs in the case of journal name) regressing each of the eight survey scores onto the predictor variable, then adjusting for multiple comparisons (Benjamini & Hochberg, 1995). After adjusting p-values, none of the findings were significant; that is, SME ratings of each article did not differ as a function of the journal in which it was published, publication year, number of citations, number of authors, or the text features (e.g., readability score, average sentence length, number of prepositional phrases) extracted from the abstract and practical implications sections. For the citation rate and readability score, this finding was surprising. It would have made sense if articles with more citations per year were rated higher, as that would suggest that “better” articles (as defined by the number of citations) are better received by SMEs. The null finding here suggests some misalignment between what academia rates highly (i.e., more citations) and what SMEs in small businesses find to be valuable. Similarly, it would have made sense if more readable articles received higher ratings, as that would support the idea that poor communication is a driver of the scientist–practitioner gap. This null finding, combined with the generally more positive ratings on writing quality, suggests that writing quality or style is not an important factor when SMEs consider the relevance of an academic article.
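For readers who wish to reproduce this style of analysis, the following is a minimal sketch of regressing each survey score onto an article-level predictor and applying the Benjamini-Hochberg adjustment. It is illustrative only; the file name and column names (e.g., sme_ratings.csv, readability_score) are hypothetical placeholders rather than the variable names used in our data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Hypothetical data: one row per SME rating of an article, with the eight survey
# scores and article-level/section-level features as columns.
ratings = pd.read_csv("sme_ratings.csv")

outcomes = [
    "abstract_quality", "interesting", "relevance", "price",
    "implications_quality", "effectiveness", "appropriateness", "roi",
]

# Regress each of the eight survey scores onto a single predictor (here, the
# Flesch readability score) and collect the raw p-values.
pvals = []
for outcome in outcomes:
    fit = smf.ols(f"{outcome} ~ readability_score", data=ratings).fit()
    pvals.append(fit.pvalues["readability_score"])

# Benjamini-Hochberg (false discovery rate) adjustment across the eight tests.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(pd.DataFrame({"outcome": outcomes, "p_raw": pvals, "p_adj": p_adj, "significant": reject}))
```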

Finally, we asked SMEs how much money (in US dollars) their company would pay to access and read the full article, giving them a benchmark comparison of $30.00 on average per article (Sporte, 2012). On average, SMEs indicated that they would spend $19.20 to read the article, with 36.40% indicating that they would not spend any money at all on the article. The responses were heavily positively skewed (i.e., skewed to the right, with most responses clustered at low dollar amounts). Once again, the prices did not differ as a function of the journal in which the article was published, publication year, number of citations, or readability score. All data and code are publicly available on the OSF, and we encourage commentary responses to explore the data and find interesting additional analysis options to further investigate these ideas.

Open-ended comment data results (NLP analyses)

In addition to Likert ratings, SMEs provided open-ended comments for the article abstracts and practical implication sections. They were prompted with “Please provide some optional explanation for why you found this section to be interesting or helpful (or not).” Despite being optional, almost all participants wrote something meaningful. Some example comments are: “Its use of larger, technical language is not only extremely clunky but also ostracizing to those not familiar with the jargon. Though its information might be good, it’s inconsequential if I can barely get through it,” “It’s all academic language that makes no sense to actual humans, so I am not entirely sure what it is about, but it seems to be using $100 words to hide the pretty obvious conclusion,” and “I read this three times and for the life of me I can’t even understand what they’re trying to get at. I don’t see anything here that is valuable or actionable from a business standpoint.” Although mostly negative, there were some positive comments such as “I think it is helpful because it makes me as a leader think about how my leadership style comes across.”

Because text data may provide insights beyond traditional survey data (Hickman et al., 2020; Kobayashi et al., 2018b), we conducted several natural language processing (NLP) analyses for supplemental purposes. Specifically, we performed two forms of text classification—sentiment analysis and emotion analysis. Broadly speaking, text classification is a type of NLP task that involves training a machine learning model to automatically categorize text documents into predefined categories or labels (Kobayashi et al., 2018a). When performing the specific type of text classification known as sentiment analysis, classification categories are types of sentiment (e.g., positive, negative, neutral); for emotion analysis, classes are constructed as types of emotions (e.g., sadness, anger, joy). A classification model must first be trained using labeled examples before predicting the categories of new examples. In the current research—for instance—this would involve coding a portion of SME comments into sentiment categories in addition to emotion categories and training a model from scratch. However, by leveraging state-of-the-art NLP models known as transformer models (Footnote 6; see Wolf et al., 2020), which are commonly “pretrained” on tasks such as sentiment and emotion analysis (e.g., Colón-Ruiz & Segura-Bedmar, 2020; Mishev et al., 2020; Naseem et al., 2020; Zhang et al., 2020), we were able to extract sentiment and emotion scores from SME comments without further training.

Before extracting sentiment and emotion scores, SME comments were augmented by replacing the word “it” with “the article.” We performed no additional text preprocessing on SME comments. Positive, neutral, and negative sentiment scores were produced using a pretrained RoBERTa model fine-tuned on sentiment-labeled Twitter data (Barbieri et al., 2020). For emotion scores, SME comments were analyzed using a pretrained DistilBERT model fine-tuned on the GoEmotions dataset (Demszky et al., 2020); this analysis resulted in 28 emotion scores (e.g., anger, disapproval, excitement, realization) for each SME comment. Both sentiment and emotion scores were transformed into probabilities summing to 1.00 within each respective analysis: the 28 emotion scores sum to 1.00 for each comment, as do the three sentiment scores.
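As an illustration of this scoring step, the following is a minimal sketch using the Hugging Face transformers library. The specific model checkpoints shown are assumptions consistent with the papers cited above and are not necessarily the exact checkpoints used in this study.

```python
import re
from transformers import pipeline

# Sentiment: a RoBERTa model fine-tuned on sentiment-labeled Twitter data
# (assumed checkpoint consistent with Barbieri et al., 2020).
sentiment_clf = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-sentiment",
    top_k=None,  # return probabilities for all sentiment labels
)

# Emotion: a DistilBERT model fine-tuned on the 28 GoEmotions labels
# (assumed checkpoint; not necessarily the one used in the study).
emotion_clf = pipeline(
    "text-classification",
    model="joeddav/distilbert-base-uncased-go-emotions-student",
    top_k=None,  # return probabilities for all 28 emotion labels
)

comment = "It's confusing and I don't see how it applies to my shop."
# Light augmentation described above: replace the standalone word "it" with "the article."
comment = re.sub(r"\bit\b", "the article", comment, flags=re.IGNORECASE)

# Each call returns, for each input text, a list of {"label": ..., "score": ...} dicts.
sentiment_scores = sentiment_clf([comment])[0]
emotion_scores = emotion_clf([comment])[0]
print(sentiment_scores)
print(sorted(emotion_scores, key=lambda d: d["score"], reverse=True)[:3])  # top three emotions
```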

In addition to sentiment and emotion scores, comments that mentioned articles being confusing, irrelevant, impractical, and not novel were coded into four binary variables (see Table 3). We used relevant keywords (e.g., “gibberish,” “confusing,” “nonsense,” “unnecessary,” and “irrelevant”) to assist in the coding process. Levels of agreement were good for the confusing (ICC1 = 0.78), irrelevant (ICC1 = 0.68), and not novel (ICC1 = 0.71) codes, and not good for the impractical code (ICC1 = 0.29). All coders then met to achieve consensus on all items. Based on these data, about one in five SMEs (20.3%) expressed that articles were confusing, complex, or overly verbose in their comments. Somewhat fewer SMEs (13.1%) expressed that the research topic was not relevant or helpful to their organization. Example comments include “I think small companies don’t need to worry about this stuff yet. People are not suing them to death” and “Most leadership roles in our organization are predetermined. I’m not sure how this information could be used to better identify leaders in the interview process.” However, only 4.3% of SMEs mentioned that the interventions or processes they read would provide little practical value or would cost too much in terms of money or personnel. Example comments include “The information is useful, just not pointing towards a solution” and “I don’t have the money to hire many leaders right now.” Additionally, only 8.3% of SMEs mentioned that the findings were not novel or were too obvious. Example comments include “There is nothing here that any marginally competent manager doesn’t already know” and “Hire coaches to increase team performance is not exactly rocket science or particularly helpful for small businesses with tight budgets.”
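As a brief illustration of the agreement check reported above, the following is a minimal sketch of computing ICC1 across coders; the file and column names are hypothetical placeholders, and this is not the exact code used in the study.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per (comment, coder) pair with the
# binary code assigned by that coder (1 = comment expressed confusion).
codes = pd.read_csv("coded_comments_long.csv")

icc = pg.intraclass_corr(
    data=codes, targets="comment_id", raters="coder", ratings="confusing"
)
# ICC1 corresponds to the one-way random-effects, single-rater estimate
# reported above (e.g., ICC1 = 0.78 for the confusing code).
print(icc.loc[icc["Type"] == "ICC1", ["Type", "ICC", "CI95%"]])
```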

Table 3. Most and Least Relevant SME Comments Determined by NLP Sentiment and Emotion Scores

Note. We identified exemplar comments based on the probability score of the NLP model classifying a given comment to the sentiment or emotion labels in column one. In other words, the exemplar comments for caring were most related to the caring emotion as determined by the model.

To evaluate the accuracy of NLP-derived scores, we examined their convergence with Likert ratings provided by SMEs. Given the number of NLP variables (i.e., 3 sentiment scores and 28 emotion scores), we chose to focus on the variables most related to SME Likert ratings. This resulted in seven focal NLP variables: positive sentiment, negative sentiment, and five emotions (i.e., annoying, caring, confusing, disappointment, and embarrassment). As an integrity check, we ranked comments by their value on each of the seven NLP variables. Then, we selected the three most exemplary comments as determined by their values on each variable. Here, an exemplary comment is one that the model assigned a high probability of corresponding to a specific sentiment or emotion. The results of this analysis are provided in Table 3.
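The following is a minimal sketch of how such exemplar comments can be identified by ranking comments on each focal NLP variable; the file and column names are hypothetical placeholders rather than the exact variables used in the study.

```python
import pandas as pd

# Hypothetical data frame: one row per SME comment, with columns holding the
# probabilities produced by the sentiment and emotion models.
scores = pd.read_csv("comment_nlp_scores.csv")

focal_vars = ["positive_sentiment", "negative_sentiment", "annoying",
              "caring", "confusing", "disappointment", "embarrassment"]

# For each focal variable, pull the three comments with the highest probability,
# i.e., exemplar comments of the kind reported in Table 3.
for var in focal_vars:
    exemplars = scores.nlargest(3, var)[["comment", var]]
    print(f"\nTop comments for '{var}':")
    print(exemplars.to_string(index=False))
```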

Convergence of sentiment and emotion scores with Likert ratings

Overall, most of the estimated r coefficients describe a medium effect with SME Likert ratings of the article sections. NLP analyses suggest that comments were relatively aligned with SME ratings. For the unique items SMEs used to rate article abstracts (Footnote 7), lower scores tended to accompany comments expressing more negative sentiment (r = −.45 for interest and r = −.46 for relevance), confusion (r = −.33 and r = −.39), and annoyance (r = −.38 and r = −.36). When SMEs wrote negative comments, for example, they were likely to rate abstracts as less relevant to their business context (r = −.46). Text analysis also revealed less obvious patterns. SME comments were slightly less related to their Likert scores when rating the practical implications sections. This was particularly the case for the ratings taken from the items measuring the appropriateness, effectiveness, and ROI of the practical implications, with effect sizes ranging from |r| = .22–.33 for appropriateness, |r| = .19–.34 for effectiveness, and |r| = .20–.33 for ROI. Trends in Table 4 may suggest that SMEs found the practical implications to be less confusing overall, whereas complex vocabulary and jargon were more impactful in the abstract. When considering the comments that explicitly mentioned that the article was confusing, irrelevant, or impractical (i.e., rows 15, 16, and 17 in Table 4), it seems that SMEs are negatively influenced by confusing abstracts (|r| = .30–.34) and irrelevant practical implications (|r| = .36–.44).

Table 4. Correlation Matrix Between SME Likert Ratings and SME Comment NLP Scores

Note. ***p < .001, **p < .01, *p < .05. NLP scores based on comments ordered by average zero-order correlation with Likert rating variables.

a Missing coefficients (NA) are a result of Likert items being unique to abstract or practical implications rating scale (see Appendix A).

Impact of additional factors after controlling for confusion

As scholars have underscored (e.g., Gernsbacher, 2018; Stricker et al., 2020; Timming & Macneil, 2023), to have the greatest impact, researchers should write articles in a way that is easily understood by the public. One could argue that if a person does not understand what they are reading, they cannot accurately determine whether it is irrelevant, impractical, and so forth. With this in mind, we performed several additional analyses to examine whether confusion was the primary factor behind SME ratings. Specifically, we wanted to determine if comments explicitly mentioning factors other than confusion (i.e., irrelevance, impracticality, or a lack of novelty) were also helpful in predicting SME ratings of an article.

To do so, we compared two multilevel models for each of the seven Likert rating scales. We allowed average ratings to vary by person, an approach also referred to as a “random intercepts model.” In the first random-intercepts model, we included the binary confusion variable (i.e., one if a comment expressed confusion, otherwise zero) as the only predictor of SME Likert ratings. In the second model, all coded variables were used (i.e., confusion, irrelevance, impracticality, and a lack of novelty). After fitting each model, a chi-square difference test was performed. Here, a significant chi-square difference would indicate that, by adding variables other than confusion, such as irrelevance and impracticality, one can better predict SME Likert ratings. Put simply, significant results would suggest that confusion is not the only determinant of SME ratings.

Indeed, we found that the second model better predicted each of the seven Likert rating scales: abstract interest, Δχ²(3, N = 77) = 17.062, p < 0.001; abstract relevance, Δχ²(3, N = 77) = 24.689, p < 0.001; appropriateness of the practical implications, Δχ²(3, N = 71) = 11.390, p = 0.01; effectiveness of the practical implications, Δχ²(3, N = 71) = 12.459, p < 0.01; the perceived number of feasible options described in the practical implications, Δχ²(3, N = 71) = 8.016, p < 0.05; perceived return on investment described in the practical implications, Δχ²(3, N = 71) = 8.385, p < 0.05; and quality of an article’s abstract and practical implications, Δχ²(3, N = 77) = 12.589, p < 0.01. These results suggest that researchers should focus on producing research that is not only easy to read but also relevant, whereas novelty and practicality appear to be slightly less important.
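For illustration, the following is a minimal sketch of this model comparison using random-intercepts linear mixed models and a likelihood-ratio (chi-square difference) test; the data file and column names are hypothetical placeholders, and this is not the exact analysis code used in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format data: one row per SME x article rating, with binary
# codes for whether the SME's comment expressed confusion, irrelevance,
# impracticality, or a lack of novelty.
df = pd.read_csv("sme_ratings_coded.csv")

# Model 1: random intercepts by participant, confusion as the only predictor.
m1 = smf.mixedlm("abstract_interest ~ confusing",
                 data=df, groups=df["participant_id"]).fit(reml=False)

# Model 2: add the remaining three coded predictors.
m2 = smf.mixedlm("abstract_interest ~ confusing + irrelevant + impractical + not_novel",
                 data=df, groups=df["participant_id"]).fit(reml=False)

# Chi-square difference (likelihood-ratio) test with 3 degrees of freedom, one
# for each added predictor. Models are fit with ML (reml=False) so their
# likelihoods are comparable across different fixed-effect structures.
delta_chi2 = 2 * (m2.llf - m1.llf)
p_value = stats.chi2.sf(delta_chi2, df=3)
print(f"Delta chi-square(3) = {delta_chi2:.3f}, p = {p_value:.4f}")
```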

Invitation for debate

We hope that this focal article thus far has illustrated to readers that there is a notable and important challenge whereby academic findings in I-O psychology are not being effectively bridged or translated to help practitioners working in small businesses. We have argued that this may be due to a historical tendency of traditional I-O papers to focus on large businesses—likely due to practical constraints preventing research in small businesses—which has resulted in I-O theories that are not relevant for small businesses and/or recommendations that are not practical due to time, money, resource, and technological limitations in small businesses. Our data present initial evidence that small business owners do react somewhat negatively to academic journal articles, and the open-ended comments point to common reasons: the articles are seen as confusing or poorly written, irrelevant, impractical, or not novel (i.e., “it’s just common sense!”). Interestingly, the survey data suggest that SMEs responded more positively to the practical implications sections than to the abstracts, but the strongest negative emotions were elicited when abstracts were confusing and/or when practical implications were irrelevant.

We contend that the issues presented and discussed in this focal article raise important questions as to the generalizability of the studies produced by the field of I-O psychology. Importantly, at no point have we questioned the validity of the results of these articles, the accuracy of the statistical analyses conducted, or the thoroughness of the peer review process. Rather, we are questioning whether the research is as important as the articles describe, especially in the context of small business owners and managers. In addition, we ask what can be done to address this potential issue in our field. For example, perhaps there need to be more I-O studies focusing on phenomena relevant specifically to small businesses. Although there may be practical challenges in doing this, there are entire fields of study devoted to the subject (e.g., the Journal of Small Business Management), which suggests that not only is this possible but there is also room for collaboration between I-O and other fields. Outside of enhancing research, other initiatives could help bridge the scientist–practitioner gap by capitalizing on the value of I-O scholars who have made their careers in translating I-O research for business managers. These individuals could help overcome the “confusing” reaction that our small business owners had to the academic articles. We summarize these and other recommendations for reducing the scientist–practitioner gap in Table 5 below.

Table 5. Themes and Recommendations for Academics When Publishing Research for Small Businesses

a Themes are ordered by the proportion of comments coded to each theme. For example, the largest number of comments were related to Readability, then Relevance, and so forth. Although 8.3% of comments were seen as “not novel” and too obvious, we felt these comments did not provide meaningful insight for future research.

As this is a focal article in an interactive exchange journal, we hope that many unanswered questions or important limitations will be addressed in response commentaries. To kick off the debate, we offer the following potential perspectives that peer commentators can take in response to the ideas and data presented so far.

1. Is the problem of the scientist–practitioner gap overstated? As with all data, ours has limitations, and perhaps the gap is not as apparent or as wide as we believe it to be. For example, it is possible that the primary reason for negative reactions to these papers has to do with the writing style as opposed to the content of the journal articles; journal articles are not written in a way that is intended to be read by public audiences. Our additional analysis of our data suggests that this is not the sole issue—content (namely, relevance and practicality) is still an issue according to small business owners—but future studies may show different findings. Moreover, even if the primary issue is writing style, this raises a related question for debate: Shouldn’t journal articles be written in a way that encourages people from outside of our immediate discipline to be able to read and interpret our findings effectively?

2. Does the scientist–practitioner gap even matter? Is it potentially even a good thing to keep the “academic theory” in one place (i.e., peer-reviewed publications) and the “applied practice” in another? Of note, many academics engage with practitioners in consulting and other forms of communication beyond peer-reviewed publications, and legions of I-O practitioners build their careers on translating academic I-O research to solve business problems. Perhaps this is sufficient, and it is acceptable for the academic peer-reviewed I-O publications to be less than helpful to everyday business managers.

3. Is the focus on small businesses warranted? For example, the low ratings may be because small businesses face unique, context-specific challenges that could not possibly be addressed in larger-scale academic publications. Or perhaps such questions might be answered by continued research that may apply to small businesses and the contexts in which they operate (e.g., leadership, teams, multiteam systems). Thus, might it be meaningless and not worth the effort to make academic publications applicable to small businesses?

4. What are some potential solutions to improve the practical applicability of the “practical implications” sections? For example, some have suggested that peer-reviewed journal articles always include a practitioner as either an author or reviewer to ensure that findings are translated into tangible next steps for practitioners. What are the pros and cons of a solution like this?

5. Another suggestion was to only allow “practical implications” sections for meta-analyses or similar articles that can provide more generalizable recommendations drawn from a larger body of research, as opposed to just one study. What are the pros and cons of a solution like this?

6. How might the “gap” be affected by the pressures created by tenure and promotion criteria? For example, what if tenure committees required faculty members to demonstrate expertise in communicating or applying their research to practitioners? What are the pros and cons of a solution like this?

7. What about journals? Should we have more “bridge” journals recognized as part of top-tier academic lists? How can we reduce the access costs of journals for practitioners, and should we?

We hope these questions, among the many others that are likely to arise from this focal article, serve as fruitful invitations for debate and honest, thoughtful dialogue. Ultimately, as many prior focal article authors have expressed, we believe that the research conducted in I-O psychology is interesting, impactful, and relevant to the wider population and society at large. Our critique aims to strengthen and improve how we bring our science to the public, especially as it pertains to the 47.5% of the US workforce employed in small businesses. Overall, we hope that I-O scholars and practitioners alike continue to pay close attention to how we can make our research and science more applicable to the everyday worker.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/iop.2024.11.

Footnotes

1 SBA standards refer to businesses that are for profit, independently owned and operated, nondominant in their field, and physically located and operated in the U.S. and its territories. The number of employees of these businesses is defined as the 12-month average of all employees, including temporary and part-time workers. The average receipts of these businesses are defined as the 3-year average of total or gross income, plus the cost of goods sold.

2 According to Bajwa and König (2019), listed in alphabetical order.

3 Although the authors felt this search pattern was likely to align with how an individual might search for small business articles in an academic database, a reviewer suggested that the search pattern used may underestimate the number of small business articles published in top I-O journals; readers should keep this in mind. Nonetheless, we performed an additional search using a less conservative search pattern that matched abstracts and titles if they included a number less than 500 followed by the word “employee*” or “worker*.” Although this pattern resulted in roughly five times as many matches, none of the additional matches centered on small businesses.
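For illustration, the following is a minimal sketch of this kind of match in Python; the function name, the exact regular expression, and the matching logic shown here are simplified assumptions rather than the precise code used in our search.

```python
import re

# Hypothetical sketch: flag a title or abstract that mentions a number below 500
# immediately followed by a word beginning with "employee" or "worker".
PATTERN = re.compile(r"\b(\d{1,3})\s+(employee\w*|worker\w*)\b", flags=re.IGNORECASE)

def mentions_small_headcount(text: str) -> bool:
    """Return True if the text contains '<number under 500> employee(s)/worker(s)'."""
    return any(int(number) < 500 for number, _ in PATTERN.findall(text))

print(mentions_small_headcount("We surveyed 120 employees of a family-owned firm."))  # True
print(mentions_small_headcount("Data came from 1200 workers across 14 plants."))      # False
```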

4 Broadly defined, because different industries were represented and the employee threshold the SBA uses to define a “small business” varies by industry.

5 Readability score calculated as: 206.835 − [1.015 × average sentence length] − [84.6 × average syllables per word] (see Flesch, 1948).
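As a concrete illustration, the formula can be computed as follows (a sketch of the standard Flesch formula in Python, not the exact code used to score the abstracts in this study):

```python
def flesch_reading_ease(total_words: int, total_sentences: int, total_syllables: int) -> float:
    """Flesch (1948) Reading Ease; higher values indicate more readable text."""
    average_sentence_length = total_words / total_sentences   # words per sentence
    average_word_syllables = total_syllables / total_words    # syllables per word
    return 206.835 - 1.015 * average_sentence_length - 84.6 * average_word_syllables

# Example: a 100-word abstract with 5 sentences and 170 syllables
print(round(flesch_reading_ease(100, 5, 170), 1))  # 42.7
```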

6 For more information on the specific transformer architectures used in this study, see Liu et al. (2019) for RoBERTa and Sanh et al. (2020) for DistilBERT.
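To give readers a sense of how comment-level sentiment scores of this kind are typically produced, the following is a minimal sketch using the Hugging Face transformers pipeline API (Wolf et al., 2020). The checkpoint named below is a publicly available RoBERTa-based sentiment model and the example comments are invented; neither is necessarily the exact model, configuration, or data used in this study.

```python
from transformers import pipeline

# Illustrative only: a publicly available RoBERTa-based sentiment classifier.
sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)

comments = [
    "Clear and useful; I could try this with my staff next week.",
    "Too much jargon; I cannot tell what this means for a shop with six employees.",
]

for comment, result in zip(comments, sentiment(comments)):
    # Each result is a dict such as {"label": "negative", "score": 0.97}.
    print(result["label"], round(result["score"], 2), comment)
```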

7 These items were “Rate the extent the article addresses topics that are personally interesting” and “Rate the extent the findings of this article are relevant to one’s business context.”

References

Bajwa, N. u. H., & König, C. J. (2019). How much is research in the top journals of industrial/organizational psychology dominated by authors from the US? Scientometrics, 120(3), 1147–1161. https://doi.org/10.1007/s11192-019-03180-2
Baldridge, D. C., Floyd, S. W., & Markóczy, L. (2004). Are managers from Mars and academicians from Venus? Toward an understanding of the relationship between academic quality and practical relevance. Strategic Management Journal, 25(11), 1063–1074. https://doi.org/10.1002/smj.406
Banks, G. C., Pollack, J. M., Bochantin, J. E., Kirkman, B. L., Whelpley, C. E., & O’Boyle, E. H. (2016). Management’s science-practice gap: A grand challenge for all stakeholders. Academy of Management Journal, 59(6), 2205–2231. https://doi.org/10.5465/amj.2015.0728
Barbieri, F., Camacho-Collados, J., Neves, L., & Espinosa-Anke, L. (2020). TweetEval: Unified benchmark and comparative evaluation for tweet classification. arXiv. http://arxiv.org/abs/2010.12421
Bartunek, J. M., & Rynes, S. L. (2014). Academics and practitioners are alike and unlike: The paradoxes of academic-practitioner relationships. Journal of Management, 40(5), 1181–1201. https://doi.org/10.1177/0149206314529160
Belli, G. (2010). Bridging the researcher-practitioner gap: Views from different fields. In Proceedings of the Eighth International Conference on Teaching Statistics (ICOTS8). International Statistical Institute. https://www.stat.auckland.ac.nz/~iase/publications/icots8/ICOTS8_1D3_BELLI.pdf
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, 57(1), 289–300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x
Colón-Ruiz, C., & Segura-Bedmar, I. (2020). Comparing deep learning architectures for sentiment analysis on drug reviews. Journal of Biomedical Informatics, 110, 103539. https://doi.org/10.1016/j.jbi.2020.103539
Deadrick, D. L., & Gibson, P. A. (2007). An examination of the research-practice gap in HR: Comparing topics of interest to HR academics and HR professionals. Human Resource Management Review, 17(2), 131–139. https://doi.org/10.1016/j.hrmr.2007.03.001
DeChurch, L. A., & Marks, M. A. (2006). Leadership in multiteam systems. Journal of Applied Psychology, 91(2), 311–329. https://doi.org/10.1037/0021-9010.91.2.311
Demszky, D., Movshovitz-Attias, D., Ko, J., Cowen, A., Nemade, G., & Ravi, S. (2020). GoEmotions: A dataset of fine-grained emotions (arXiv:2005.00547). arXiv. https://doi.org/10.48550/arXiv.2005.00547
Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32, 221–233. https://doi.org/10.1037/h0057532
Gernsbacher, M. A. (2018). Writing empirical articles: Transparency, reproducibility, clarity, and memorability. Advances in Methods and Practices in Psychological Science, 1(3), 403–414. https://doi.org/10.1177/2515245918754485
Goldstein, C. M., Murray, E. J., Beard, J., Schnoes, A. M., & Wang, M. L. (2020). Science communication in the age of misinformation. Annals of Behavioral Medicine, 54(12), 985–990. https://doi.org/10.1093/abm/kaaa088
Hickman, L., Thapa, S., Tay, L., Cao, M., & Srinivasan, P. (2020). Text preprocessing for text mining in organizational research: Review and recommendations. Organizational Research Methods, 25(1), 114–146. https://doi.org/10.1177/1094428120971683
Hoppe, B., & Reinelt, C. (2010). Social network analysis and the evaluation of leadership networks. Leadership Quarterly, 21(4), 600–619. https://doi.org/10.1016/j.leaqua.2010.06.004
Kobayashi, V. B., Mol, S. T., Berkers, H. A., Kismihók, G., & Den Hartog, D. N. (2018a). Text mining in organizational research. Organizational Research Methods, 21(3), 733–765. https://doi.org/10.1177/1094428117722619
Kobayashi, V. B., Mol, S. T., Berkers, H. A., Kismihók, G., & Den Hartog, D. N. (2018b). Text classification for organizational researchers: A tutorial. Organizational Research Methods, 21(3), 766–799. https://doi.org/10.1177/1094428117719322
Kossek, E. E., & Lautsch, B. A. (2018). Work-life flexibility for whom? Occupational status and work-life inequality in upper, middle, and lower level jobs. Academy of Management Annals, 12(1), 5–36. https://doi.org/10.5465/annals.2016.0059
Kossek, E. E., & Lee, K. H. (2020). The coronavirus & work-life inequality: Three evidence-based initiatives to update US work-life employment policies. Behavioral Science & Policy, 6(2), 77–85. https://doi.org/10.1353/bsp.2020.0018
Kozlowski, S. W., Chen, G., & Salas, E. (2017). One hundred years of the Journal of Applied Psychology: Background, evolution, and scientific trends. Journal of Applied Psychology, 102(3), 237–253. https://doi.org/10.1037/apl0000192
Landy, F. J. (1992). Hugo Münsterberg: Victim or visionary? Journal of Applied Psychology, 77(6), 787–802. https://doi.org/10.1037/0021-9010.77.6.787
Lewis, N. A., Jr., & Wai, J. (2021). Communicating what we know and what isn’t so: Science communication in psychology. Perspectives on Psychological Science, 16(6), 1242–1254. https://doi.org/10.1177/1745691620964062
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv. http://arxiv.org/abs/1907.11692
Marion, R., & Uhl-Bien, M. (2001). Leadership in complex organizations. Leadership Quarterly, 12(4), 389–418. https://doi.org/10.1016/S1048-9843(01)00092-3
Mishev, K., Gjorgjevikj, A., Vodenska, I., Chitkushev, L. T., & Trajanov, D. (2020). Evaluation of sentiment analysis in finance: From lexicons to transformers. IEEE Access, 8, 131662–131682. https://doi.org/10.1109/ACCESS.2020.3009626
Naseem, U., Razzak, I., Musial, K., & Imran, M. (2020). Transformer based deep intelligent contextual embedding for Twitter sentiment analysis. Future Generation Computer Systems, 113, 58–69. https://doi.org/10.1016/j.future.2020.06.050
Nicolai, A. T., Schulz, A.-C., & Göbel, M. (2011). Between sweet harmony and a clash of cultures: Does a joint academic-practitioner review reconcile rigor and relevance? Journal of Applied Behavioral Science, 47(1), 53–75. https://doi.org/10.1177/0021886310390866
Parker, K., Horowitz, J. M., & Minkin, R. (2020, December 9). How the coronavirus outbreak has—and hasn’t—changed the way Americans work. Pew Research Center. https://www.pewresearch.org/social-trends/2020/12/09/how-the-coronavirus-outbreak-has-and-hasnt-changed-the-way-americans-work
Rogelberg, S. G., King, E. B., & Alonso, A. (2022). How we can bring I-O psychology science and evidence-based practices to the public. Industrial and Organizational Psychology, 15(2), 259–272. https://doi.org/10.1017/iop.2021.142
Rotolo, C. T., Church, A. H., Adler, S., Smither, J. W., Colquitt, A. L., Shull, A. C., Paul, K. B., & Foster, G. (2018). Putting an end to bad talent management: A call to action for the field of industrial and organizational psychology. Industrial and Organizational Psychology, 11(2), 176–219. https://doi.org/10.1017/iop.2018.6
Salas, E., Cooke, N. J., & Rosen, M. A. (2008). On teams, teamwork, and team performance: Discoveries and developments. Human Factors, 50(3), 540–547. https://doi.org/10.1518/001872008X288457
Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2020). DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv. http://arxiv.org/abs/1910.01108
Schneider, B., & Pulakos, E. (2022). Expanding the I-O psychology mindset to organizational success. Industrial and Organizational Psychology, 15(3), 385–402. https://doi.org/10.1017/iop.2022.27
Small Business & Entrepreneurship Council. (2019). Facts & data on small business and entrepreneurship. https://sbecouncil.org/about-us/facts-and-data
Small Business Labs. (2014, October 14). Most small businesses are started with less than $10,000. https://www.smallbizlabs.com/2014/10/most-small-businesses-are-started-with-less-than-10000.html
Sporte. (2012). How much does it cost to get a scientific paper? ScienceBlogs. https://scienceblogs.com/digitalbio/2012/01/09/how-much-does-it-cost-to-get-a
Stricker, J., Chasiotis, A., Kerwer, M., & Günther, A. (2020). Scientific abstracts and plain language summaries in psychology: A comparison based on readability indices. PLOS ONE, 15(4), e0231160. https://doi.org/10.1371/journal.pone.0231160
Timming, A. R., & Macneil, J. (2023). Bridging human resource management theory and practice: Implications for industry-engaged academic research. Human Resource Management Journal, 33(3), 592–605. https://doi.org/10.1111/1748-8583.12523
U.S. Small Business Administration. (2016). Table of small business size standards matched to North American Industry Classification System codes. https://www.sba.gov/sites/default/files/files/Size_Standards_Table.pdf
U.S. Small Business Administration. (2022). Small business profile. SBA Office of Advocacy. https://advocacy.sba.gov/wp-content/uploads/2022/08/Small-Business-Economic-Profile-US.pdf
Van de Ven, A. H., & Johnson, P. E. (2006). Knowledge for theory and practice. Academy of Management Review, 31(4), 802–821. https://doi.org/10.5465/amr.2006.22527385
Vosburgh, R. M. (2022). Closing the scientist-practitioner gap: Research must answer the “SO WHAT” question. Human Resource Management Review, 32(1), 100633. https://doi.org/10.1016/j.hrmr.2017.11.006
White, J. C., Ravid, D. M., Siderits, I. O., & Behrend, T. S. (2022). An urgent call for I-O psychologists to produce timelier technology research. Industrial and Organizational Psychology, 15(3), 441–459. https://doi.org/10.1017/iop.2022.26
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Scao, T. L., Gugger, S., Drame, M., Lhoest, Q., & Rush, A. M. (2020). HuggingFace’s Transformers: State-of-the-art natural language processing. arXiv. http://arxiv.org/abs/1910.03771
Zelin, A., Lider, M., & Doverspike, D. (2015). SIOP career study executive report. Society for Industrial and Organizational Psychology. https://www.siop.org/Portals/84/PDFs/Professionals/SIOP_Careers_Study_Executive_Report_FINAL-Revised_031116.pdf
Zhang, T., Xu, B., Thung, F., Haryono, S. A., Lo, D., & Jiang, L. (2020). Sentiment analysis for software engineering: How far can pre-trained transformer models go? In 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME) (pp. 70–80). https://doi.org/10.1109/ICSME46990.2020.0001
Figures and tables

Table 1. Occurrence of Small Business Mentions in Top I-O Psychology Journal Articles
Figure 1. Density plot of scores (ranging from 1 to 5) across all raters on each variable
Table 2. Descriptive Statistics of Survey Data
Table 3. Most and Least Relevant SME Comments Determined by NLP Sentiment and Emotion Scores
Table 4. Correlation Matrix Between SME Likert Ratings and SME Comment NLP Scores
Table 5. Themes and Recommendations for Academics When Publishing Research for Small Businesses
