
Expanding the boundaries of interdisciplinary field: Contribution of Network Science journal to the development of network science

Published online by Cambridge University Press:  20 January 2023

Valentina V. Kuskova*
Affiliation:
Lucy Family Institute for Data & Society, University of Notre Dame, 1220 Waterway Blvd, #H248, Indianapolis, IN 46202, USA
Dmitry G. Zaytsev
Affiliation:
University of Notre Dame at Tantur, Jerusalem, Israel
Gregory S. Khvatsky
Affiliation:
Lucy Family Institute for Data & Society, Department of Computer Science and Engineering, University of Notre Dame, 384E Nieuwland Science Hall, Notre Dame, IN 46556 USA
Anna A. Sokol
Affiliation:
Lucy Family Institute for Data & Society, Department of Computer Science and Engineering, University of Notre Dame, 384E Nieuwland Science Hall, Notre Dame, IN 46556 USA
Maria D. Vorobeva
Affiliation:
Faculty of Economics and Business, KU Leuven, Leuven, Belgium
Rustam A. Kamalov
Affiliation:
Independent Researcher
*Corresponding author. Email: [email protected]

Abstract

In this paper, we examine the contribution of Network Science journal to the network science discipline. We do so from two perspectives. First, expanding the existing taxonomy of article contribution, we examine trends in theory testing, theory building, and new method development within the journal’s articles. We find that the journal demands a high level of theoretical contribution and methodological rigor. High levels of theoretical and methodological contribution become significant predictors of article citation rates. Second, we look at the composition of the studies in Network Science and determine that the journal has already established a solid “hard core” for the new discipline.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

What are the indicators of journal quality? Even though the quality of research has always been associated with the perceived quality of the journals in which it is published (Hull & Wright, 1990), this question is becoming increasingly important (Mingers & Yang, 2017) with the explosion of open-access publishing (Butler, 2013). One of the most widely used indicators, the journal impact factor, has always been under scrutiny for its ability to establish the quality of individual research papers and of the journal itself (e.g., Saha et al., 2003; Simons, 2008; Jawad, 2020; Lariviere & Sugimoto, 2019; Law & Leung, 2020). This is especially true for newer journals (Simons, 2008), where researchers are left with nothing but a publisher's reputation to rely on (e.g., Allen & Heath, 2013; Gabbidon et al., 2010). Despite newer comparative studies evaluating various quality indicators (e.g., Mingers & Yang, 2017; Chavarro et al., 2018; Ahmad et al., 2019) and the development of new indices of quality assessment (e.g., Zhang et al., 2019; Xie et al., 2019), the question remains: how does a journal establish its reputation?

In this study, we attempt to answer this important question for one of the newest journals for network research, positioned a priori as a flagship outlet—Network Science. Having published its inaugural issue in 2013, it is still establishing itself as a premier research outlet. While its objective citation-based indicators place it in the first quartile of its target disciplines (such as sociology and political science [Footnote 1]), it has yet to reach the rank of a "top field" journal. Given that many journals do not rise in the rankings beyond a certain point, does Network Science have what it takes to do so?

Is it too early to tell? We do not think so. Network Science has already issued eight full volumes. Most published articles (with the exception of the very newest) are fully indexed and are being actively cited. The journal has become popular among "big name" academics: out of over 190 articles published through 2020, over half (51%) were authored by at least one scholar with an H-index over 20, and 69% by authors who have an i10-index [Footnote 2]. Articles were published across almost twenty very different disciplines, from medicine, to management and economics, to sociology, sports, and political science. Methodological articles, offering new methods, can be applied to virtually any discipline; many of them substantially advance network analytics and have considerable potential to influence many fields of science. In other words, there appears to be enough evidence that the journal is well on its way to establishing the "canon of research" (Brandes et al., 2013:12) in network science. What is unclear is exactly where its impact is most pronounced—and that certainly cannot be measured by the journal's impact factor alone.

The role of journals in science has been conceptualized in a variety of ways (Borgman, 1990). Of course, the most important role of any research outlet is communication, or the exchange of research results. However, journals also play a role in the individual careers of scientists (e.g., Petersen & Penner, 2014; Raff et al., 2008), since publishing papers in high-impact journals is seen as "crucial" for obtaining professional recognition in most fields (Van Raan, 2014:24). Most importantly, journals are credited with the development of science in many fields (e.g., Marshakova-Shaikevich, 1996; Verbeek et al., 2002; Mingers & Leydesdorff, 2015).

While the many roles of journals are not disputed, the measurement of this role differs substantially. The "science of science" literature provides an ample review of historical attempts to "measure things" (Van Raan, 2014:21). A review by Fortunato et al. (2018) provides an overview of both the trends in science development and the trends in studying this development. Its central idea is that a "deeper understanding of the factors behind successful science" (Fortunato et al., 2018:1) may allow us to enhance science's prospects of effectively addressing societal problems. The "science of science" is thus an important field in its own right. Moreover, with respect to the "science of science," Fortunato et al. (2018) advocated for the integration of citation-based metrics with alternative indicators.

Based on an extensive literature review and the substantial work done by others before us, we found two such "alternative indicators" that go beyond the journal impact factor and might shed light on the question we are asking: what are the indicators of journal quality? Not surprisingly, one answer is the quality of research, as judged by scientists themselves. There are at least two approaches to measuring this quality.

One way for research to make a substantial contribution is by enhancing our theoretical understanding of the studied phenomena—a role that is difficult to overstate (Colquitt & Zapata-Phelan, 2007). There are multiple ways in which journals fulfill this role, among them publishing: research at the intersection of disciplines or fields (Zahra & Newey, 2009); studies that bridge the quantitative–qualitative divide (Shah & Corley, 2006); studies that have practical significance (Corley & Gioia, 2011); studies that utilize advanced methodology (e.g., Carroll & Swatman, 2000; Levy, 2003); or studies that have higher societal impact (Bornmann, 2013). Theory building by a journal cannot currently be evaluated with a single measure or score; a more substantive, qualitative analysis of the research published in a journal is usually necessary.

Another is the pattern of scientists' references to each other's work. Methods of this type have been in use for a long time, from co-citation analysis and citation patterns (since about the 1970s, e.g., Small, 1973), to co-word analysis (since about the 1980s, e.g., Callon et al., 1983), to different variants of bibliometric analysis (since about the 1990s, e.g., Peritz, 1983). There are too many to list, but recent years have seen substantial advances in the statistical analysis of research quality. Quite fitting for Network Science are the methodological advances in network approaches to citation analysis (e.g., Maltseva & Batagelj, 2020; Caloffi et al., 2018; Van Eck & Waltman, 2014; Mane & Börner, 2004), which make it possible to evaluate how co-citations create "islands of research," or separate scientific subfields. Network Science, bridging research streams from different fields, could certainly have developed its own islands, quite distinct from traditional disciplines. Over time, this confirmation of research quality and the pattern of co-citation evolves into a contribution that can be summarized as developing a "scientific research program" (Lakatos, 1968).

So, with these two perspectives at hand, the goal of this study is to answer the question of what role Network Science, a relatively new journal, plays in developing the field of network science and, implicitly, in establishing itself as a premier, high-quality journal. First, we qualitatively examine the journal's contribution to the field by evaluating the theoretical and methodological contribution of Network Science articles and determine whether the obtained characteristics have an impact on article citation rates. Second, we examine the keywords of the published studies to assess the presence of research islands, or topics, that may indicate whether the journal has started establishing the "hard core" (Lakatos, 1968) of network science.

In the next section, we establish the theoretical foundation for our analysis. It is followed by a description of our data set and the methods used to analyze it, in line with the two approaches we use to reach the study's objectives. We then present the results of the study and conclude the paper with a discussion of the findings and key directions for future research.

2. Advancing science: Theoretical and methodological contribution, article impact as a measure of journal impact, and the scientific research program

While the exact meaning of "science" is probably a question for the philosophy of science, a hundred-year-old definition by Dewey (1910) appears to capture its essence for our purposes: science is not ready-made knowledge, it is "an effective method of inquiry into any subject matter" (Dewey, 1910:124). Subject matter is, essentially, content for theory (Biglan, 1973), and debates about what matters more for the development of science—theory or method—are clearly outside the scope of this article. Instead, we propose a way to evaluate the theoretical and the methodological contribution of an article simultaneously and independently of one another, and to examine the impact of both in the articles of Network Science. We also look at whether these articles have formed the "hard core" of the new science promoted by the journal.

2.1 Theoretical contribution

When it comes to evaluating a study's theoretical contribution, the network science field faces a special challenge, not typical of other sciences. Some scholars argue that, being truly interdisciplinary, it lacks a unifying theory; instead, network concepts, such as centrality, community, and connectivity, serve as unifying features of multiple theories from different fields (e.g., Patty & Penn, 2017). Others claim that network theory became "one of the most visible theoretical frameworks" that could be applied to almost any field (Havlin et al., 2010), but without specifying precisely what "network theory" is. Yet others, such as Borgatti & Halgin (2011), go to great lengths to delineate network theory from the theory of networks, and to separate flow models from bond models. Such a difference of opinions is too drastic for the truth to "lie in between." Much more likely, it appears, is that for some fields, using network concepts to ground theoretical arguments constitutes theory development, while for others, "network theorizing" (Borgatti & Halgin, 2011:1170) is required for the contribution to be sufficient.

However, to evaluate an article's theoretical contribution, it is impossible to examine every field to determine its appropriate "theoretical practices." What we need is an almost universally understood and accepted "measuring instrument" that can assess an article's theoretical contribution without going into field specifics.

It turns out such a tool does exist. Colquitt & Zapata-Phelan (2007) proposed a taxonomy that allows evaluating an article's theoretical rigor along two dimensions: theory testing (hypothetico-deductive reasoning) and theory building (inductive reasoning). The taxonomy was developed from a large body of theoretical and empirical writing in management and the philosophy of science. As a result, it incorporates the entire spectrum of theoretical contributions that empirical articles can make.

This taxonomy applies almost universally to studies published in Network Science (with just a few exceptions, described in the Method section), because it fits the universal canons of theoretical measurement (Eran, 2017). In almost every discipline, there are existing theories that serve as a foundation for the current "inquiry into any subject matter" (Dewey, 1910:124). Moreover, as Wasserman & Faust (1994) put it, methods of network analysis were developed to test specific hypotheses about network structure, creating a "symbiosis" (Wasserman & Faust, 1994:4) of theory and method. Therefore, network analysis is grounded in both theory and application (Wasserman & Faust, 1994), regardless of the field of study.

In their study, Colquitt & Zapata-Phelan (2007) created a typology of the theoretical contribution of an article along two dimensions, which are summarized in Table 1. The first dimension is "testing theory"—an evaluation of the extent to which existing theories are used as grounding for the study's hypotheses. The importance of testing existing theories cannot be overemphasized, as testability is one of the major characteristics of a scientific theory (Simon & Groen, 1973; Simon, 1994).

Table 1. Colquitt & Zapata-Phelan (2007) taxonomy

The second dimension is "building theory," expressed as either supplementing existing theory or introducing components that serve as foundations for new theory. We refer the reader to the original article for a detailed description of each dimension but want to emphasize one important point. This typology allows for further classification of articles. For example, in the original study, Colquitt & Zapata-Phelan (2007) divided all examined articles into "reporters" (low on both the theory-building and theory-testing dimensions), "qualifiers" (medium on both), "builders" (low on testing, high on building), "testers" (high on testing, low on building), and "expanders" (high on both dimensions). They further examined the trends and the impact of the analyzed articles and found that higher levels of theory building and theory testing are associated with higher article citation rates. So beyond establishing the level of theoretical contribution, this typology allows for further analysis of an article's impact.

For most articles with a reference to an anchoring theory, estimating the "testing theory" dimension is straightforward. The "building theory" dimension is usually also quite apparent, as authors typically indicate existing theoretical gaps and the ways in which they fill them and advance new knowledge. Therefore, this typology is appropriate for almost any article published in Network Science. The possible exceptions are review articles and introductions to special issues, purely methodological articles that describe formal mathematical extensions of existing methods, and the like. We describe the articles that could not be evaluated using this taxonomy in more detail in the Method section.

2.2 Methodological contribution

Almost any discipline is a combination of theory and method, and network science is no exception. Moreover, as Barabási (2016) asserted, network science is determined not only by its subject but also by its methodology. Method is important, even essential, to the development of theory in network science. As Wasserman & Faust (1994) stated, social network methods have developed "to an extent unequaled in most other social disciplines," becoming an "integral part" of advancing social science theory, empirical research, and formal mathematics and statistics (Wasserman & Faust, 1994:3).

Evaluating an article's impact, and as a result the journal's impact on the field, is impossible without assessing the contribution of the methods used. Given that there is currently no tool for evaluating methodological contribution (Judge et al., 2007), we developed our own scale—in essence, a third dimension for Colquitt & Zapata-Phelan's taxonomy. To create a continuum of method with ordinal anchors, we relied on several popular network science textbooks, starting with the classical text of Wasserman & Faust (1994). Perhaps this is true of all methods, but in network analysis, books progress "from simple to complex" (Wasserman & Faust, 1994:23). Therefore, using the structure of classical texts, we derived a progressive scale of methodological contribution.

No comprehensive book or guide covering all available network methods currently exists, to the best of our knowledge. This gap was first noticed by Wasserman & Faust (1994) in their seminal text. That book, however, has since become dated with respect to the newest methods (Barabási, 2016), so it alone could not be used as a guide. Currently, methodological texts are available in one of two formats: general-interest books or introductory textbooks, which contain the basics of network analysis, and more advanced collections, offered as "lecture notes" on special topics. Regardless of the type, these books have one thing in common: they describe rigorously tested, well-known approaches to network analysis and usually lag the field's development because of long publishing times (e.g., Greco, 2013; Clark & Phillips, 2014). New, previously unknown methods get community-tested via GitHub (e.g., Cosentino et al., 2017; Kalliamvakou et al., 2014; Lima et al., 2014) or other similar online aggregators (Begel et al., 2013; Vasilescu et al., 2014); introduced via open-source software such as R packages (e.g., Zhao & Wei, 2017; Ortu et al., 2018); or, with delays due to the length of the peer-review process, published as journal articles (Galiani & Gálvez, 2017). In this respect, the Network Science journal plays an especially important role: it is one of the main outlets for pioneering methodological research.

To create the scale of methodological contribution, we considered this peculiarity of the field's development, relying on different books for anchoring the scale and leaving room for methods developed outside the traditional venues. We used the following texts to derive the ordinal methodological contribution scale: Wasserman & Faust (1994), Brandes & Erlebach (2005), Carrington et al. (2005), Easley & Kleinberg (2010), Scott & Carrington (2011), Kadushin (2012), De Nooy et al. (2018), and Newman (2018). There are many books devoted to network methods, and we do not claim to have selected a complete set. These books, however, have made a substantial impact on academia as measured by citation rates (over 1,000 citations each; most have substantially more). Moreover, they represent a range of books in the field, from purely methodological texts to those devoted to specific applications.

In each of these books, methods progress from simple to complex. This progression is succinctly summarized in the description of Newman's (2018) "Networks": "…Provides a solid foundation, and builds on this progressively with each chapter" [Footnote 3]. We found this to be true for all the books used in our review. Relying on them as a guide, we generated the anchors and the points of the proposed methodological scale. We considered data requirements, computational effort, the training and understanding required to execute certain methods, and the availability of software. The resulting scale, with examples from the examined books, is presented in Table 2.

Table 2. Scale of methodological contribution with examples from classical network analysis books

The first two points on the methodological scale represent low levels of network methods contribution. The first point covers methods limited to network visualizations, descriptive statistics (numbers of nodes and ties, etc.), and indices (e.g., density, reachability, connectivity, geodesics, eccentricity, diameter). These are very simple network measures that some books refer to as "fundamentals," and they are usually the first topics covered in introductory network courses. Given advances in network analysis software, index calculations and visualizations require minimal effort. These metrics and measures can be produced for any network data, so data pose minimal requirements as well. Studies at this level of network-analytic difficulty are usually those where network analysis methods are secondary to some other statistical approach to hypothesis testing (e.g., Bellavitis et al., 2017). While network analysis plays a role in the study's structure, this role is minimal.

The second point on the scale represents basic network models. Among them, for example, are models of social influence, where network indices may be calculated first and then used as predictors in multiple regression models (e.g., Singh & Delios, 2017). Also in this category are models that examine community structure (e.g., Smirnov & Thurner, 2017; Smirnov, 2019); subgroup and affiliation networks; and applications of basic network models to a variety of problems in diverse fields. Computational requirements are usually relatively low, and data are either secondary or can be obtained from social networks and general surveys. Advanced training in the formal mathematics of networks is usually not required to execute these models.

The third and fourth points on the scale represent medium levels of methodological contribution. Articles at these levels either introduce new metrics (such as new centrality indices) or suggest improvements to existing methods. New metrics usually find their way into books eventually (e.g., Brandes & Erlebach, 2005; Carrington et al., 2005). Improvements to existing methods, usually incremental, are published almost exclusively as journal articles until they are substantially tested by the community and become standard (e.g., Yaveroglu et al., 2014; Goodreau, 2007). Studies using these methods require either an advanced understanding of network analysis or substantial computational effort. However, their methodological impact, though important, is not confirmed until substantial testing takes place.

The fifth through seventh points represent high levels of methodological contribution. The fifth point corresponds to fundamental statistical models, advanced network structure detection models, blockmodeling, and similar approaches. Executing such models requires substantial knowledge and understanding of network-analytic methods. Data usually pose serious requirements, including organization and clean-up. These models also provide nontrivial insights and are designed to test more fundamental research questions. Temporal network analysis is one point higher on the scale because of additional data requirements (longitudinal collection) and the higher computational demands of approaches such as TERGM and SIENA. Special models, which apply complex models in advanced ways to community detection, epidemiological settings, and so on, occupy the highest point on the scale. All methods within the "high contribution" group require substantial training in network analysis, higher computational effort, and more restrictive data.

However, we found it highly unlikely that a study using network methods utilizes only one of the above-mentioned approaches. Usually, articles contain network description and visualization, basic network indices, and then model(s) of various difficulty. Because the points on the scale are not mutually exclusive, for the actual computation of an article's methodological contribution we propose a simple sum of all points present in the article. The resulting contribution can vary from 0 (though this is unlikely for a network-related study) to 28 (though this is also unlikely given the usual article length). The resulting scale is also ordinal, and we applied it to all examined articles to separate studies with a more advanced approach to analysis from those with lower levels of methodological impact.

2.3 Article impact as measure of journal impact

Citations of research articles are increasingly used to judge the quality and status of academic journals (Judge et al., 2007). There are debates about the legitimacy of citations as an evaluation tool, which started in the early days of scientometrics and are still ongoing (e.g., Ewing, 1966; Garfield, 1979; Lindsey, 1989; MacRoberts & MacRoberts, 1989; Onodera & Yoshikane, 2015). While there are some objections to using citations as measures of quality, the alternatives are rather limited (e.g., Phelan, 1999; Jarwal et al., 2009). Since citation analysis is apparently "here to stay" (Lister & Box, 2009), the next question is what causes an article to be cited.

There are many studies on this topic in diverse disciplines: management (e.g., Judge et al., 2007), medicine (e.g., Annalingam et al., 2014), law (e.g., Ayres & Vars, 2000), economics (e.g., Johnson, 1997), and biotechnology (e.g., Petruzzelli et al., 2015), to name a few. Despite very different academic cultures, there appears to be a consensus on some of the characteristics that increase article citation rates in any field, and they divide along two philosophical dimensions—particularistic and universalistic.

The particularistic, or social constructivist, perspective (Baldi, 1998) suggests that the source of a scientific contribution—the author—may be more important than the merits of the published paper. There is evidence suggesting that citing "big name" academics may count more towards academic promotions (e.g., Austin, 1993) and that the reputation of authors is important for the success of empirical papers (e.g., Mialon, 2010). This "big name" perspective is still thriving in academia, and publication decisions and citations may still be attributable to the personal status of a writer rather than the merits of the research per se (Judge et al., 2007).

Particularistic attributes include the author's past high productivity (usually measured by an H-index or a similar indicator), which may be perceived to enhance the citing authors' legitimacy (Mitra, 1970), and the prestige of a scientist's university, which may implicitly enhance the author's respectability (e.g., Helmreich et al., 1980). The author's gender, a personal attribute that should be irrelevant to scientific contribution (Bedeian & Feild, 1980), also appears to have an effect. This effect differs between studies, however: some find that female authors receive fewer citations than males (e.g., Cole, 1979; Rossiter, 1993), while others find exactly the opposite (e.g., Ayres & Vars, 2000).

The universalistic perspective places value on scientific progress and attributes an article's success to its contribution to the development of science (Cole & Cole, 1974). Who wrote the paper is irrelevant; what matters is the article's solid theoretical foundation and high-quality execution (Judge et al., 2007). Previous research cites several study characteristics among universalistic predictors of an article's success: refinement of existing theories (Cole & Cole, 1974), extensions of new theories (Beyer et al., 1995), exploration of new paradigms (Newman & Cooper, 1993), and the high quality of a study's methods (Gottfredson, 1978; Shadish et al., 1995).

Given that article citations are widely seen as a measure of a journal's quality, examining the citations of Network Science articles is a reasonable approach for evaluating its contribution. In line with the universalistic perspective, Colquitt & Zapata-Phelan's (2007) framework, together with the scale of methodological rigor, provides measures of the universalistic attributes of article quality. Particularistic attributes, given their documented influence on citations, can be used as controls to account for the variance they explain in citation rates.

2.4 Scientific research program

Lakatos (1968) characterized all scientific research programs by their "hard core"—the generally accepted assumptions, tenets, and axioms that form the foundation of the discipline—and a "protective belt" of auxiliary hypotheses, which get tested while the core assumptions remain intact. Network science is no exception. There are certain generally accepted principles of the network science "core," stated and restated in every foundational network science text.

In the editorial to the inaugural issue of Network Science, Brandes et al. (2013) set a high goal for the journal—to "excel above and beyond disciplinary boundaries" (Brandes et al., 2013:2). The editors clearly articulated their vision for the networks field, delineating what they saw as "network theory" and what data constituted "network data." They further called network science "the emerging science" (Brandes et al., 2013:12), not tied to any field—transcending disciplinary boundaries, an evolving network itself. The role of the Network Science journal, then, was to develop "a canon of research"—an outlet for "the most promising and widest-reaching" papers (Brandes et al., 2013:12)—that pushed this new science forward.

Perhaps, then, the principles of the network science "core" are summarized most succinctly by Brandes et al. (2013) and defined as the "common grounds" for the field: all aspects of the study of "relational data," from collection to analysis, interpretation, and presentation (Brandes et al., 2013:2). Work in the network science field, then, rests on the assumption that, from the relational nature of the data, we can extract additional meaning not available otherwise. Having stated that these relational data form the core of network science, Brandes et al. (2013) nonetheless ask whether such a unified foundation does exist. As they say, if it does, then there is the promise of a new scientific discipline, in line with Lakatos's (1968) characterization.

Attempts to examine the intellectual structure of network science have been made previously, mostly with bibliometric analysis tools (but also, of course, with network methods). Most studies, to the best of our knowledge, have been somewhat limited in scope and focused on a narrow aspect of network science, such as social media research (e.g., Coursaris & Van Osch, 2014) or network communities (e.g., Khan & Niazi, 2017). Perhaps, being interdisciplinary, network science is too large a field to examine in a single study, and therefore such a study has not yet been attempted. The question before us, however, is whether the network science "core" has been established by the Network Science journal. Recent advances in bibliometric network analysis (e.g., the review by Cobo et al., 2011) allow us to address it, with potential answers to the question of what constitutes the "core" topics and where the "protective belt" (Lakatos, 1968) lies.

3. Data and methods

3.1 The dataset

The data set consisted of all Network Science articles published in 2014–2020 [Footnote 4]. We used 2014 as the starting year of data collection because it was the year the journal was indexed in the Web of Science (WoS) Core Collection database. The initial data set contained 195 publications. We excluded book reviews, since they do not present results of original research, leaving a sample of 183 articles. In this set, there were three references to addenda or errata, which we also removed. The final data set consisted of 180 articles, and the same set was used for both parts of the analysis.

3.2 Measures

Dependent variable. The dependent variable in the study was the number of citations an article had received up to the point of data collection. Data for this variable were downloaded directly from the WoS database at the time of data collection.

Independent variables. Independent variables were the article’s theoretical contribution variables (“Testing theory” and “Building theory”) and methodological contribution score (“Method”), which we coded manually for each of the articles. To evaluate the articles’ theoretical and methodological contribution, we downloaded the texts of articles in our data set. We then coded each article with scores of its theoretical and methodological contributions, and a set of article attributes previously identified as having an impact on citation rates.

Each article was coded independently by three subject matter experts. To unify coding between coauthors and minimize data entry errors, we developed a macro-based data entry form in Excel, with some fields, such as authors, title, issue, and pages, prefilled. Coders then checked the appropriate boxes for each attribute, which translated into either a 0/1 Boolean score in Excel (indicating the absence or presence of a certain attribute) or an actual score for the "Theory building" and "Theory testing" variables. To evaluate the rate of agreement between raters, we used the two-way random, absolute agreement intraclass correlation coefficient (ICC; Koo & Li, 2016). The value of the ICC for the "Testing theory" variable between the three raters was 0.78, and for the "Building theory" variable 0.77, which indicates good reliability (Koo & Li, 2016). Nonetheless, all differences between raters were discussed, and all articles with different scores were reviewed again until all three raters reached a consensus score.
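For readers who wish to reproduce this reliability check, a minimal sketch is shown below, assuming the ratings are stored in a long-format table (one row per article–rater pair); the file and column names are hypothetical, and the pingouin package is used only as one convenient way to obtain the two-way random, absolute agreement ICC.

```python
# A minimal sketch of the inter-rater reliability check, assuming a long-format
# table of ratings; file and column names are hypothetical.
import pandas as pd
import pingouin as pg

ratings = pd.read_csv("theory_testing_ratings.csv")  # columns: article_id, rater, score

icc = pg.intraclass_corr(
    data=ratings,
    targets="article_id",  # the rated objects (articles)
    raters="rater",        # the three subject matter experts
    ratings="score",       # the "Testing theory" score assigned by each rater
)

# "ICC2" is the two-way random-effects, absolute-agreement, single-rater
# coefficient discussed by Koo & Li (2016).
print(icc.loc[icc["Type"] == "ICC2", ["Type", "ICC", "CI95%"]])
```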

To assign a code to the "Theory building" and "Theory testing" variables, we relied on the instructions of Colquitt & Zapata-Phelan (2007). Consistent with their methodology, we allowed for half-points in addition to integers in our coding.

For coding "Method," we used a simple 0/1 checkmark for every method used in the study (each of the methodological scale points was represented by a separate checkbox). Points on our proposed scale were not mutually exclusive. To compute the "Method" score, we added all points together; the resulting contribution could vary from 0 (though unlikely for a network-related study) to 28 (though also unlikely given the usual article length). For ease of interpretation, we linearly transformed the resulting score to values between 0 and 10. Because this was a linear transformation, it did not affect any of the variable relationships. The resulting scale, however, was continuous.
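A minimal sketch of this scoring step is given below, assuming the seven scale points were coded as 0/1 checkbox columns and that each point contributes its ordinal value (1 through 7) to the raw sum, which is consistent with the 0–28 range reported above; the column and file names are illustrative.

```python
# A sketch of the "Method" score computation; column and file names are illustrative.
import pandas as pd

# Scale points (Table 2) and their ordinal values on the 1-7 scale.
scale_points = {
    "descriptives_visualization": 1,
    "basic_network_models": 2,
    "new_metric": 3,
    "method_improvement": 4,
    "fundamental_statistical_models": 5,
    "temporal_models": 6,
    "special_models": 7,
}

articles = pd.read_csv("coded_articles.csv")

# Raw score: sum of the values of all scale points present in the article
# (0 to 28, since 1 + 2 + ... + 7 = 28).
raw_score = sum(articles[col] * value for col, value in scale_points.items())

# Linear rescaling to the 0-10 range used in the analysis, plus the squared
# term ("Method squared", discussed below) used to probe a nonlinear effect.
articles["method_score"] = raw_score * 10 / 28
articles["method_score_sq"] = articles["method_score"] ** 2
```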

To account for the possibility of a nonlinear impact of the "Method" variable on article citations, we also created a "Method squared" variable. The Network Science journal differs from many journals in the field in its emphasis on generating new methodological knowledge. Some articles published in the journal introduce highly sophisticated methods of limited application, which would receive a high "Method" score but might receive fewer citations because they apply to narrower research domains. A squared variable would capture this effect if present.

Control variables comprised a set of universalistic and particularistic attributes shown to have an impact on citation rates in previous studies. Universalistic controls were article age, length, and the number of references cited. To control for particularistic attributes, we used information about each of the article's authors, including the author's H-index (obtained manually from Publons), affiliation (as indicated on the article at the time of publication), gender, and the number of authors. Author disambiguation was handled by locating author records in the affiliated institutions listed on the articles. The affiliation variable was used to obtain the rank of the author's university from the 2020 QS rating [Footnote 5]. Because we performed the analysis at the article level, not the author level, we generated variables capturing the article-level attributes: "Maximum H-index" (the highest among all authors of the article) and "Best university rank." Because of the exploratory nature of the study, we wanted to examine whether "big names" (Leberman et al., 2016) or higher-ranked universities account for an additional number of citations with all variables in the study.

The "Gender" variable was coded as Boolean, with a value of 1 assigned to "female." We also calculated an article-level "Average gender" (a value greater than 0.5 indicates that the article has more female authors). For each article, we also recorded the number of coauthors.
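A sketch of how these author-level attributes can be collapsed to the article level is shown below; the input table and column names are hypothetical.

```python
# Collapsing hypothetical author-level records into the article-level controls
# described above; the table and column names are illustrative.
import pandas as pd

authors = pd.read_csv("authors.csv")  # columns: article_id, h_index, university_rank, gender

article_controls = authors.groupby("article_id").agg(
    max_h_index=("h_index", "max"),                   # "Maximum H-index" among coauthors
    best_university_rank=("university_rank", "min"),  # QS ranks: lower is better
    average_gender=("gender", "mean"),                # share of female authors (1 = female)
    n_authors=("article_id", "size"),                 # number of coauthors
)
```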

We obtained descriptive statistics on all study variables, including zero-order correlations. Then, we ran a series of models to evaluate article impact. Next, we examined trends in theory building, theory testing, and methodological contribution and isolated discrete article types based on the combination of these characteristics. We used the data mining software Orange (Demšar et al., 2013) to perform hierarchical clustering [Footnote 6]. We used Manhattan distance (Sinwar & Kaushik, 2014) as the measure of dissimilarity between observations (allowing for possible multiple solutions) and the complete-linkage method as the measure of dissimilarity between clusters (Jarman, 2020). Data were normalized prior to analysis.
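The clustering itself was done in the Orange GUI; the sketch below shows an equivalent pipeline in scipy under the same choices (normalized scores, Manhattan distances, complete linkage); the number of clusters and the variable names are illustrative.

```python
# An equivalent of the Orange clustering step, sketched with scipy: min-max
# normalized contribution scores, Manhattan (cityblock) distances between
# articles, and complete linkage between clusters. Names are illustrative.
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

articles = pd.read_csv("coded_articles.csv")
features = articles[["theory_building", "theory_testing", "method_score"]]

# Normalize each dimension to [0, 1] before computing distances.
normalized = (features - features.min()) / (features.max() - features.min())

distances = pdist(normalized.values, metric="cityblock")  # Manhattan distance
tree = linkage(distances, method="complete")              # complete-linkage dendrogram

# Cut the dendrogram into a chosen number of clusters (the number here is illustrative).
articles["cluster"] = fcluster(tree, t=7, criterion="maxclust")
```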

3.3 Analytical approach: Evaluating article’s impact

To examine article impact, we performed regression analysis, assessing the relationship between theory building, theory testing, method, and article citations on the set of 180 articles. As the analysis method, we used zero-inflated negative binomial regression (Garay et al., 2011; Moghimbeigi et al., 2008; Didegah & Thelwall, 2013) in Stata.

We chose this regression method for several reasons. First, the dependent variable is a count variable, which is typically Poisson-distributed. However, as Table 3 shows, the dependent variable is overdispersed, with a standard deviation substantially exceeding the mean. This alone would suggest negative binomial regression. However, Figure 1 demonstrates that the dependent variable contains a substantial number of zeros (as is expected for recently published articles). Using the Vuong test for non-nested models (Long, 1997), we determined that the excess zeros warranted the use of a zero-inflated negative binomial model (z = 1.92, p = 0.0277). Second, this method allows for substantial post-estimation evaluation, which enabled us to estimate the probability of having excess zeros in the dependent variable over time.
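The estimation itself was done in Stata; purely for illustration, a rough Python analogue with statsmodels is sketched below, with article age entering the zero-inflation (logit) equation and the contribution and control variables entering the count equation. The variable names are hypothetical.

```python
# A rough Python analogue of the Stata zero-inflated negative binomial model;
# the paper's estimation was done in Stata, and the variable names here are
# hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

data = pd.read_csv("coded_articles.csv")

# Count-model predictors: contribution scores plus controls.
X = sm.add_constant(
    data[["theory_building", "theory_testing", "method_score",
          "n_references", "best_university_rank"]]
)

# Inflation-model predictor: article age, modeling the excess zeros expected
# for recently published articles.
X_infl = sm.add_constant(data[["age"]])

model = ZeroInflatedNegativeBinomialP(
    endog=data["citations"], exog=X, exog_infl=X_infl, inflation="logit", p=2
)
result = model.fit(maxiter=500)
print(result.summary())
```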

Table 3. Descriptive statistics and zero-order correlations

Note: n = 180; * p < 0.05; ** p < 0.01; standard errors in parentheses.

Figure 1. Distributions of the main study variables.

Because our study was largely exploratory, we used a multistep approach, building a series of models that evaluated the impact of the independent variables on article citation rates in the presence of various controls [Footnote 7]. Since the standard R² (percentage of variance explained) could not be calculated, we used Cragg & Uhler's pseudo-R² as the most appropriate for our model (Long, 1997). This index measures the improvement from the null model to the fitted model. Though not the same as the standard R², it indicates the degree to which the model parameters improve the prediction of the model (Long, 1997). Finally, we explored the relationship that distinct article types may have with citations, using the obtained article categories as predictors.
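For reference, the Cragg & Uhler (also known as Nagelkerke) pseudo-R² compares the likelihood of the intercept-only model, $L_0$, with the likelihood of the fitted model, $L_M$, over $n$ observations:

\begin{equation*}R^{2}_{\mathrm{CU}} = \frac{1-\left(L_{0}/L_{M}\right)^{2/n}}{1-L_{0}^{2/n}}\end{equation*}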

3.4 Analytical approach: Exploring the scientific research program

To isolate the "hard core" of the Network Science research program, we used the methodology of bibliometric analysis presented in detail by Maltseva & Batagelj (2020) and Caloffi et al. (2018). There are certainly many methods and tools for bibliometric analysis, including network-based ones, which are amply described by Cobo et al. (2011). They range from visualization of co-occurrence networks (e.g., Van Eck & Waltman, 2014; Yang et al., 2012) to the use of Kleinberg's burst detection algorithm and co-word occurrence analysis (e.g., Mane & Börner, 2004). In one very similar study, Sedighi (2016) used word co-occurrence network analysis to map the scientific field of informetrics. The principal difference between that study and ours is that we go a step further, isolating "islands" within the field.

The main idea of the "island" method (De Nooy et al., 2018, p. 127; Batagelj & Cerinšek, 2013, p. 54) is to use the word co-occurrence network (which we label $nKK$ later in this section) to explore the "hard core" of Network Science. Batagelj & Cerinšek (2013) defined an "island" in a network as a maximal connected subgraph in which the values of the edges between the nodes in the subgraph are greater than the values of the edges connecting the subgraph to the rest of the network. This method allows for effective separation of dense networks into subgraphs that represent relatively cohesive communities and are easier to analyze and describe. The advantage of using the "islands" method for bibliometric analysis is that it allows identifying local communities of different sizes and link strengths in dense networks. It also makes it easier to identify nodes that do not belong to any of the found communities.
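As a naive illustration of this definition (not the efficient island-extraction algorithm implemented in Pajek, and ignoring the maximality requirement), the following sketch checks whether a candidate node set satisfies the island condition in a weighted networkx graph.

```python
# A naive illustration of the island condition: a candidate node set qualifies
# if its induced subgraph is connected and every internal edge weight exceeds
# every edge weight linking the set to the rest of the network. Maximality is
# not checked, and this is not the extraction procedure used in Pajek.
import networkx as nx

def satisfies_island_condition(graph: nx.Graph, nodes: set) -> bool:
    sub = graph.subgraph(nodes)
    if len(nodes) < 2 or not nx.is_connected(sub):
        return False
    internal = [d["weight"] for _, _, d in sub.edges(data=True)]
    boundary = [
        d["weight"]
        for u, v, d in graph.edges(data=True)
        if (u in nodes) != (v in nodes)  # exactly one endpoint inside the set
    ]
    # With no boundary edges the condition holds trivially.
    return min(internal) > (max(boundary) if boundary else float("-inf"))
```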

We used the same data set of 180 articles. To transform the data set into a set of networks, we used WoS2Pajek (Batagelj, 2017). In line with the Maltseva & Batagelj (2020) methodology, we created the unweighted bimodal articles-to-keywords network ($WK$). This network connects articles published in the Network Science journal in the period 2014–2020 to their respective keywords.

Article keywords were derived from existing WoS fields: ID ("keywords plus"—keywords assigned to articles by the WoS platform), DE ("author keywords"—keywords assigned to articles by their authors), and TI (article titles). The keywords were obtained by lemmatization and stopword removal. We used the WoS2Pajek (Batagelj, 2017) software package to transform the Web of Science records into networks ready for further analysis. The lemmatization procedure was performed automatically by WoS2Pajek, which relies on the MontyLingua Python library, and was applied to both the keywords and the article titles. The procedures implemented in this software are described in detail by Maltseva & Batagelj (2020a), Batagelj & Maltseva (2020), Maltseva & Batagelj (2019), and Batagelj et al. (2017). These studies also show that the procedure has performed acceptably in previous use.

During data preprocessing, we removed one-character words and words containing numbers or punctuation. The resulting articles-to-keywords network ($WKryx$) consisted of 180 articles and 1,187 words.
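The cleaning itself was performed inside WoS2Pajek (with MontyLingua); purely for illustration, the sketch below reproduces the filtering rules described above, with NLTK as a stand-in lemmatizer.

```python
# An illustrative stand-in for the keyword preprocessing step (the actual
# pipeline runs inside WoS2Pajek with MontyLingua): lowercase, drop
# one-character tokens and tokens containing digits or punctuation,
# lemmatize, and remove stopwords.
import string

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def clean_keywords(raw_keywords):
    cleaned = set()
    for token in raw_keywords:
        token = token.lower().strip()
        if len(token) <= 1:
            continue  # drop one-character words
        if any(ch.isdigit() or ch in string.punctuation for ch in token):
            continue  # drop words containing numbers or punctuation
        lemma = lemmatizer.lemmatize(token)
        if lemma not in stop_words:
            cleaned.add(lemma)
    return cleaned

print(clean_keywords(["Networks", "Centrality", "2-mode", "of"]))
# -> {'network', 'centrality'}
```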

We then used the articles-to-keywords network ($WKryx$) to build a word co-occurrence network, following the Maltseva & Batagelj (2020) methodology. To avoid overrating the contribution of papers with a very large number of keywords, we used the fractional approach demonstrated previously in the literature (e.g., Maltseva & Batagelj, 2021; Batagelj & Cerinšek, 2013; Batagelj, 2020). This approach normalizes the contribution of each article so that its input to the resulting network equals 1, while assuming that each keyword within an article has equal importance. Without normalization, the outdegree of an article is its number of keywords, and the indegree of a keyword is the number of articles containing it. Normalization creates a network in which the weight of each arc is divided by the outdegree of its initial node, that is, by the sum of the weights of all arcs leaving that node.

(1) \begin{equation}nWKryx[w,k] = \frac{WKryx[w,k]}{\max\left(1,\, \mathrm{outdegree}(w)\right)}\end{equation}

Here, w is an article and k is a keyword. Multiplying the transpose of the normalized $nWKryx$ by $nWKryx$ itself generates the word co-occurrence network $nKK$:

(2) \begin{equation}nKK = nWKryx^{T} \cdot nWKryx\end{equation}

We used the resulting co-occurrence network $nKK$, built from the normalized $nWKryx$ network, to explore the "hard core" of Network Science, using the island method (De Nooy et al., 2018, p. 127; Batagelj, 2014, p. 54).
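A minimal sparse-matrix sketch of equations (1) and (2) is given below; the real pipeline operates on the Pajek files produced by WoS2Pajek, so the randomly generated incidence matrix here is purely illustrative.

```python
# A sketch of equations (1) and (2): normalize the articles-to-keywords
# incidence matrix by each article's outdegree, then multiply to obtain the
# keyword co-occurrence network. The random matrix stands in for the real
# 180 x 1,187 network exported from WoS2Pajek.
import numpy as np
from scipy import sparse

WK = sparse.random(180, 1187, density=0.01, format="csr")
WK.data[:] = 1.0  # WK[w, k] = 1 if article w lists keyword k

# Equation (1): divide each row by max(1, outdegree(w)).
outdegree = np.asarray(WK.sum(axis=1)).ravel()
nWK = sparse.diags(1.0 / np.maximum(1.0, outdegree)) @ WK

# Equation (2): keyword co-occurrence network nKK = nWK^T * nWK.
nKK = (nWK.T @ nWK).tocsr()
print(nKK.shape)  # (1187, 1187)
```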

4. Results

4.1 Descriptive statistics and distinct article types

Table 3 contains the means, standard deviations, and zero-order correlations among study variables used to evaluate article impact. Figure 1 shows the main variable distributions.

The dependent variable, number of citations, has a mean of 4.79 and a standard deviation of 9.46, indicating overdispersion relative to the traditional Poisson distribution. The histogram of citations in Figure 1 shows an excessive number of zeros. The theory-building mean of 2.98 (S.D. = 0.99) indicates that a typical article in Network Science either introduced a new mediator or moderator of an existing relationship or examined a previously unexplored relationship or process. The theory-testing mean was 3.26 (S.D. = 0.90), indicating that the typical article grounded predictions with references to past findings or with existing conceptual arguments. The method mean of 3.54 (S.D. = 2.14) indicates that a typical article in Network Science either suggested a new metric or an improvement to an existing method. The distributions of these variables do not appear to create any analytical problems.

In their study, Colquitt & Zapata-Phelan (2007) created discrete categories of articles, representing the most common combinations of theory building and theory testing, and describing the article's contribution (Reporters and Qualifiers represent low levels of theoretical contribution; the other types, high levels). We also used the methodological contribution and, despite having three dimensions to analyze, were able to map the Network Science articles onto Colquitt & Zapata-Phelan's (2007) typology.

The addition of the methodological dimension allowed us to create additional article types, characterized by high levels of methodological contribution, which we call "explorers" and "pioneers," increasing the complexity of the existing typology. They are distinct from other article types because they are characterized by high or very high scores on the Method dimension—meaning they introduce new metrics or methods. For management studies, especially those published in an applied journal, the development of new methodology is rare, so Colquitt & Zapata-Phelan's (2007) typology did not account for this type of contribution. For Network Science, however, methodological contribution is essential: the journal aims to publish new or improved methods for analyzing networks long before they end up in books. Therefore, studies that suggest completely new methods or metrics receive higher scores on the Method dimension.

Table 4 contains the map of the discrete article types. Note that, unlike Colquitt & Zapata-Phelan (2007), we did not find any articles of the "tester" type published in Network Science—apparently, there are no studies dedicated solely to testing existing theory, without other contributions. This is not surprising, given the scope of the journal.

Table 4. Discrete article types—map to the Colquitt & Zapata-Phelan (2007) typology

* Note: Not defined by Colquitt & Zapata-Phelan (2007). Low: ≤ 2.0; Medium: < 3.5.

Articles that score high on methodological contribution, as it turns out, also score relatively high on the theory-building dimension. Studies that we call "Pioneers" have very high method scores and propose a new approach to modeling while simultaneously helping to build theory. A typical example is Sewell (2018), which provides a method for building simultaneous and temporal autoregressive network models. This study is "pioneering" in nature because it proposes a substantial improvement that overcomes serious limitations of existing methods (e.g., dyad conditional independence), extends the method to account for dyadic temporal dependence, offers a new visualization method, and demonstrates how to apply the new method in two different settings, each with its own theoretical foundation. Doing so allows the authors to expand the tenets of an existing theory, because relationships that could not be explored in the past can be explored with the new method—at least a score of "4" on the Colquitt & Zapata-Phelan (2007) taxonomy.

Another previously undefined type, "Explorers," scores high on Method but also relatively high (above 3) on testing and building theory. A typical article in this category is Fujimoto et al. (2018), which examines the multivariate dynamics of one- and two-mode networks to explore sport participation among friends. The study scores high on all three dimensions: it tests an existing theory of social influence, it explores previously unexplored network evolution using a longitudinal context, and it uses a relatively complex stochastic actor-oriented multivariate dynamic model. This is substantially more complex than the "Builders" or "Expanders" designations defined by Colquitt & Zapata-Phelan (2007) allow.

Examining the number of articles of each type, it is also clear that the number of articles of high theoretical contribution types (Builders, Expanders, Explorers, and Pioneers) is greater than the number of articles with lower theoretical contribution. Figure 2 shows the graphical distribution of the automatically obtained clusters present in Network Science. Expanders, the highest level of theoretical contribution according to Colquitt & Zapata-Phelan (2007), constitute a substantial number of articles (49). Together with Pioneers (16), Explorers (6), and Builders (24), articles with high theoretical contribution account for over half of all articles in Network Science. Therefore, it is apparent that the journal publishes articles with a high potential for expanding knowledge.

Figure 2. Taxonomy of article contribution: Clusters of studies by contribution type.

4.2 Theoretical/methodological contribution and article impact

Table 5 presents the results of a series of zero-inflated negative binomial regression models in which we examined the relationship between theoretical and methodological contribution and article citation rates. In line with previous research, we started with controls, with article age as the inflation variable for excess zeros. In the sequence of models, we examined universalistic and particularistic controls separately and then combined (Models 1–3). Then, we evaluated the independent variables by themselves, with the Method variable evaluated alone and together with Method squared (Models 4–5). In Models 6–7, we sequentially added universalistic and particularistic controls, and the final model (Model 8) contains only the significant variables. Because the study was largely exploratory, the only hypothesis we tested was whether the beta coefficient of any variable was significantly different from zero.

Table 5. Theoretical and methodological contribution and article impact: coefficients of the zero-inflated negative binomial regression models

Note: n = 180; * p < 0.05; ** p < 0.01; standard errors in parentheses.

Table 6. Individual article types and article impact: coefficients of the zero-inflated negative binomial regression models

Note: n = 180; * p < 0.05; ** p < 0.01; standard errors in parentheses.

Model characteristics. All models are significant, as indicated by the likelihood ratio (LR) χ² test, which compares the full model to a model without count predictors. The overdispersion coefficient, alpha, is significantly above zero for all models, confirming that the zero-inflated negative binomial model is preferred to the zero-inflated Poisson. Pseudo-R² cannot be interpreted the same way as the R² of OLS models, but for each model it does provide an indication of the relative improvement in explanatory power when certain variables are evaluated.

Interpretation of control variables. Results indicate that the variable used to explain the inflated zeros, Age, is consistently significant and negatively related to the probability of an excess zero. This is an expected result, since newer articles are more likely than older articles to have zero citations. A post-estimation test predicting the probability of an excess zero shows that this probability declines sharply, reaching about 2% by year 4 (Table 3S in the Supplemental Materials). It is therefore reasonable to expect that an average Network Science article will have at least one citation three years after publication.

Out of all the control variables used in the study, only two, the number of references in the article and the best university rank, are statistically significant in some of the models. When the universalistic controls (article length and number of references) are used alone, neither variable is significant. The number of references becomes significant in the presence of the particularistic controls and the independent variables, and its coefficient varies relatively little across specifications. For every additional reference, the difference in the logs of expected citation counts is expected to increase by 0.01, holding all other variables constant.

Out of the particularistic controls, only the best university rank is significant, and again its variation between models is negligible. For a one-unit increase in the rank of the university, the difference in the logs of expected citation counts is expected to decrease by approximately 0.25. Because university ranks are reported in reverse order (the lower the number, the better the university), this negative relationship is intuitive and in line with existing research: authors from better universities receive more citations. We cannot establish a causal relationship with this research, but the apparent association follows the findings of previous studies.
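To make these log-scale coefficients concrete, they can be exponentiated into incidence rate ratios; the figures below are simple transformations of the coefficients above, not additional estimates:

$$
e^{0.01} \approx 1.010, \qquad e^{-0.25} \approx 0.779,
$$

that is, roughly a 1% increase in the expected citation count per additional reference, and a reduction of about 22% per one-unit increase in the (reverse-ordered) best university rank, other variables held constant.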

Interpretation of independent variables. Of the three independent variables, theory testing apparently has no association with citation rates: this variable is insignificant in all models, with and without controls. Theory building, however, has a highly significant positive relationship, also in all models. For a one-unit increase in theory building, the difference in the logs of expected citation counts is expected to increase by approximately 0.5, with the coefficient varying between 0.49 and 0.54 across models with different controls.

The Method variable has a significant and negative relationship, which is contrary to expectations, so we further tested the effect of the Method squared variable in Model 5. As suspected, the relationship between the Method variable and citation rates appears to be nonlinear: the squared term is significant. Its negative and significant coefficient (−0.48) indicates an inverted U-shaped relationship: increasing method complexity increases citations only up to a certain point. The Method variable itself becomes insignificant in the presence of Method squared. This is not surprising given the high multicollinearity between a variable and its polynomial term. It is also possible that the polynomial term fully captures the impact of the Method variable, including its negative linear trend.
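For completeness, the quadratic specification and the location of the turning point of the inverted U can be written as a generic expression (generic, because the linear Method coefficient is not significant in Model 5):

$$
\log E[\text{citations}] = \cdots + \beta_{1}\,\text{Method} + \beta_{2}\,\text{Method}^{2},
\qquad
\text{Method}^{*} = -\frac{\beta_{1}}{2\beta_{2}},
$$

with $\beta_{2} \approx -0.48 < 0$ guaranteeing that the turning point is a maximum.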

Note that Cragg & Uhler’s pseudo-R² is lowest for the models with either particularistic or universalistic controls. We cannot interpret pseudo-R² as we would R², but the models with controls only and with independent variables only have approximately equal values of this coefficient. With a combination of variables, and in the model with significant variables only (Model 8), pseudo-R² is highest. Moreover, eliminating the nonsignificant variables (the difference between Models 7 and 8) results in only a slight decrease in model characteristics (log likelihood, LR χ², pseudo-R²). Model 8 therefore appears to capture well the variables that affect citation counts: building new theory, Method squared, the number of references used in the article, and the best university rank.

Additional analysis. In the last step of the modeling process, we examined whether the article type created by the Colquitt & Zapata-Phelan (Reference Colquitt and Zapata-Phelan2007) typology is significant (Table 6). We used the same regression approach to model the impact of the individual clusters on citation counts. To start, we entered the article types as factors, with the “Builders” type as the baseline (Model 1). Since the choice of the baseline is arbitrary and can be dictated by both methodological and theoretical reasons, we selected the type that showed the most differences from the other types. Only “Expanders” were not significantly different from “Builders”; all other types showed lower citation counts in comparison.

The differential effect of these two types holds when both the particularistic and the universalistic controls are added (Models 2–3). However, when the independent variables are added, none of the article types has a significant effect (Model 4). Perhaps this is because of collinearity: the clusters already capture similarities between observations with respect to their theory-building, theory-testing, and Method characteristics. Nevertheless, this analysis confirms the observation made by Colquitt & Zapata-Phelan (Reference Colquitt and Zapata-Phelan2007) that, with respect to article quality, it is the combination of characteristics that affects citations.
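The factor coding used in these models can be illustrated with a short, hedged sketch: treatment (dummy) coding with “Builders” as the reference level, shown with a plain negative binomial GLM only because the formula interface in statsmodels does not expose the zero-inflated variant; the variable names are illustrative.

```python
# Illustrative sketch of treatment coding with "Builders" as the reference category.
# A plain negative binomial GLM stands in for the zero-inflated model used in the paper.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("articles.csv")   # columns: citations, article_type, ... (illustrative)

nb = smf.glm(
    "citations ~ C(article_type, Treatment(reference='Builders'))",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(nb.summary())   # each coefficient contrasts one article type against "Builders"
```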

4.3 Exploring the “hard core” of Network Science: Descriptive network analysis

We turn next to the results of our attempt to answer the question posed by Brandes et al. (Reference Brandes, Robins, McCranie and Wasserman2013): is there a unity of foundation in network science? Was the journal able to establish Lakatos’s (Reference Lakatos1968) “hard core,” which would be indicative of the journal’s solid contribution to the development of its field?

Figure 3 shows the distribution of the number of keywords per article in the articles-to-keywords network, $nWKryx$. Most articles use from 4 to 29 words. Figure 4 shows the frequency with which words appear in the data set: large numbers of words are mentioned once (691), twice (196), or three times (93). Many keywords are thus unique and rarely used. At the same time, some words are used extensively, reflecting the core theories and methods used in the Network Science journal. While this result is expected, it serves as a validity check of the data used in the analysis.

Figure 3. Histogram of the number of words per article (nWKryx).

Figure 4. Logarithmic plots with distributions of the number of words used in all articles (nWKryx).

An exploratory analysis showed that in the words co-occurrence network ($nKK$), the words most frequently used together with others are “network,” “social,” “model,” “dynamics,” “structure,” “graph,” “analysis,” “community,” “datum,” and “time” (Table 1S in the Supplementary Materials). To separate words used in a wider context (such as “analysis”) from those used in a narrower context (such as “graph”), we compared the words used in the largest number of articles with the words used most frequently in combination with others. Table 1S contains the top 60 words of the two networks: the words co-occurrence network ($nKK$) and the articles-to-keywords network ($nWKryx$).

Several words (underlined in the table) are present in a relatively small number of articles, yet are connected to a large number of other keywords (“individual,” “organizational,” “perception,” and “activity”). These are the wider-context words: looking at such a word in isolation, it is not possible to correctly infer the topic of a study. For example, the word “activity” may indicate a network-methodological topic (“dimensions of tie activity”) but can also relate to an applied topic (“extracurricular activity”).

Words that appear in bold in Table 1S are keywords of a narrow context: they are present in a relatively large number of articles, yet are connected to a smaller number of other keywords (“topology,” “pattern,” “distribution,” “world,” “epidemic,” “matrix,” “blockmodel”). They delineate narrower topics. For example, the word “blockmodel” in a network study most likely refers to one of several methods of network clustering. This is another confirmation of data validity.
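The relationship between the two networks can be sketched as follows, assuming a binary article-by-keyword incidence matrix: the co-occurrence network $nKK$ follows from the articles-to-keywords network by multiplying the incidence matrix with its transpose, while the column sums give the number of articles per keyword used in the wide- versus narrow-context comparison. The file and column names in the sketch are illustrative.

```python
# Illustrative sketch: deriving the keyword co-occurrence network (nKK) from the
# article-to-keyword incidence matrix (nWK). File and column names are assumptions.
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix

pairs = pd.read_csv("article_keywords.csv")          # columns: article_id, keyword
articles = pairs["article_id"].astype("category")
keywords = pairs["keyword"].astype("category")

W = csr_matrix(
    (np.ones(len(pairs)), (articles.cat.codes, keywords.cat.codes)),
    shape=(articles.cat.categories.size, keywords.cat.categories.size),
)

KK = (W.T @ W).tolil()     # keyword-by-keyword co-occurrence counts
KK.setdiag(0)              # drop self-co-occurrence on the diagonal
articles_per_keyword = np.asarray(W.sum(axis=0)).ravel()   # number of articles per keyword
```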

Figure 5 presents a visualization of the words co-occurrence network with 1,187 nodes; these are all the words obtained from the keywords and titles of Network Science articles. The font size is directly proportional to the number of articles that use each word.

Figure 5. Words co-occurrence network (nKK).

As is apparent from Figure 5, and perhaps not surprisingly, the most often used and most connected word is “network.” It creates a number of important dyads, such as “network model,” “network graph,” and “social network,” each of which could be considered a separate topic or even a subfield within the “hard core” of the journal. Some of these topics relate to the methodology of network science (graph theory, centrality measures, random and stochastic graphs, cluster analysis, community detection), while others relate to the application of network methods in various contiguous disciplines (social behavior, friendship, and power analysis). Also quite apparent are the combinations of theory and methods in network science that create separate model groups (social influence, social selection).

4.4 Clustering the “hard core” of Network Science: Islands

To further examine the composition of the “hard core” of Network Science, we use the islands approach (De Nooy et al., Reference De Nooy, Mrvar and Batagelj2018, p. 127; Batagelj & Cerinšek, Reference Batagelj and Cerinšek2013, p. 54). This method is widely used in the analysis of scientific networks and has demonstrated excellent results in detecting scientific schools in many disciplines (Doreian et al., Reference Doreian, Batagelj and Ferligoj2020).

Since the study is largely exploratory, the choice of the cut sizes that separate the islands was made based on the plots of the number of possible islands against the maximum cut size (Figure 6). The minimum cut was determined based on the distribution of the number of words per article (Figure 3). Since approximately two-thirds of articles contained between 4 and 21 words, we explored a variety of cuts but considered further only cuts of 4, 5, 6, 7, 8, and 9, as they produced more than one island.

Figure 6. Plots of changes in the number of islands against the maximum cut sizes.

To obtain islands of reasonable size, we considered a minimum cut of 4 and a maximum cut of 52 words. The sizes of the resulting components ranged from 4 to 49 nodes, allowing us to partition the words co-occurrence network ($nKK$) into three islands. The main island contains 49 nodes (Figure 7); the other, smaller islands contain 4–9 words each (Table 2S of the Supplementary Materials). These smaller islands represent narrow topics, more representative of other disciplines that use network methodology. They can be described by the following keywords: “generation, sampling, note, chain, drive, respondent, recruitment, end, stickiness” and “epidemic, global, infection, tree.”
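The islands themselves were extracted in Pajek; the sketch below is only a rough approximation of the island idea, assuming a weighted keyword co-occurrence graph: it keeps edges at or above a weight threshold and returns connected components whose size falls within the chosen bounds.

```python
# Rough approximation of the island idea (the analysis itself used Pajek):
# keep sufficiently heavy edges and retain components within the size bounds.
import networkx as nx

def simple_islands(G, weight_threshold, min_size=4, max_size=52):
    """Components of the heavy-edge subgraph whose size lies within [min_size, max_size]."""
    heavy = [(u, v) for u, v, w in G.edges(data="weight") if w >= weight_threshold]
    H = G.edge_subgraph(heavy)
    return [set(c) for c in nx.connected_components(H) if min_size <= len(c) <= max_size]

# Example call, assuming G is the weighted co-occurrence network nKK:
# islands = simple_islands(G, weight_threshold=4)
```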

Figure 7. Main island of the words co-occurrence network (nKK).

As for the two small islands (Table 2S of the Supplementary Materials), despite the variety of their topics, they are similar in that each covers only a small number of articles. These topics are new for network science but quite likely very promising for future research. The first island contains the words “generation, sampling, note, chain, drive, respondent, recruitment, end, stickiness” and represents studies that use a variety of sampling approaches. The second island contains the words “epidemic, global, infection, tree” and represents studies that apply network methodology to understanding the spread of epidemic disease.

The main island is shown in Figure 7. It appears to represent the “hard core”: the core topics of network science. The core of the island, “network, model, social,” is located in the very dense center of the network.

The “hard core” of network science is represented by the theory and methodology of network analysis. Perhaps this was expected, in line with Lakatos’s (Reference Lakatos1968) theorizing of what a “hard core” is and Brandes et al.’s (Reference Brandes, Robins, McCranie and Wasserman2013) proposition of what network science represents; this analysis, however, confirms both notions. First, the unquestionable tenets of network science are represented by the methodology of relational data analysis (highlighted in green in Figure 7). This makes network science a methodology-driven interdisciplinary field of study. The periphery of the island, the “protective belt” of Lakatos (Reference Lakatos1968), consists of various applications of network methodology.

There are two types of such applications. The first is applied network research in diverse fields, highlighted in pink in Figure 7. From the relationships between the most frequent keywords, it is apparent that network methods are applied in social psychology to the study of friendship and homophily; in sociology, to the study of actors and their collaboration; in political science, to power distribution and international relations; in economics, to the study of trade; and in management, to the study of organizations. The second type of application in the “protective belt,” shown in yellow in Figure 7, is the mathematical application of network studies by way of creating models.

This result is more impressive than could have been expected: there is indeed a core formed by the theory and methodology of network analysis. But the “protective belt” consists of two layers beyond the core, making it a nested structure. It appears that network science develops from the mathematics of graph theory to various complex models, which are then used in a variety of disciplines.

5. Discussion

The purpose of this study was to examine the contribution that Network Science, one of the newest journals and the flagship journal of the discipline, has made to the network science field. Based on a careful examination of existing studies, we found a way to evaluate the journal that goes beyond the traditional impact factor score. Drawing on some of the newest methods of network analysis, we took a two-part approach. First, and most importantly, we examined all Network Science articles published through 2020 for their universalistic characteristics, representing the contribution of the studies to the development of science. Second, we applied the islands methodology to examine the presence of a “hard core,” the unity of foundation for studies in network science.

We found that Network Science publishes articles with high potential for the advancement of science in their respective disciplines: over 50% of all studies scored high on theoretical contribution. We also identified a set of studies, the “pioneers” and “explorers,” with higher-than-average potential for significant contribution.

Since we used the methodology of Colquitt & Zapata-Phelan (Reference Colquitt and Zapata-Phelan2007), a comparison with the data presented in their study (of the Academy of Management Journal, a typical top “applied” journal) indicates a different pattern of theory development in Network Science: it is lower on the theory-testing and higher on the theory-building dimension. The referent study did not analyze the impact of the method, but in Network Science it is quite substantial, with an average article proposing at least a new metric. Also, in a departure from the analysis of an applied journal, method correlates positively and significantly with building theory but negatively with testing theory. This means that when a Network Science article builds new theory, it does so using more rigorous methods. The theory-building and theory-testing dimensions are uncorrelated, meaning that a given study does either one or the other, but not both, which is also a departure from the applied journal.

We also found it interesting that, when analyzing the contribution of article types to citations, “Builders” and “Expanders” were not significantly different from each other, while all other article types had lower citation counts than “Builders.” This includes the two new types we identified in this study, “Pioneers” and “Explorers,” which differ by introducing more rigorous methods in addition to their theoretical contribution. While it may be tempting to conclude that articles with more difficult methods are cited less, even when they are theoretically rigorous, it is too early to do so. First, the sample size for this analysis is small; there were only six “Explorer” articles, for example. Second, the journal is still young, and later research could examine the effect of these article types with more data.

Our second contribution is the development of a method for evaluating the methodological impact of a network-analytic study. Since network analysis is a field where the method is paramount, having a measure of an article’s methodological novelty helps evaluate its overall contribution to the field. Moreover, we found that the impact of the Method variable is not linear: articles that introduce highly sophisticated methods are less likely to be cited. Whether this is because such methods are too complex for the average reader or too narrow in their application remains a subject for further study.

We have also found that the “unity of foundation” of the network science field, theorized by Brandes et al. (Reference Brandes, Robins, McCranie and Wasserman2013), does indeed exist and has been established by the Network Science journal in a relatively short period of time. The “hard core” of Network Science consists of network analysis methods; the protective belt consists of applied studies and mathematical network-analytic models. While this may seem like a commonsense conclusion, it is not. First, Brandes et al. (Reference Brandes, Robins, McCranie and Wasserman2013) were cautious in their definition and did not firmly specify the unifying foundation. It was entirely possible that no main island would have emerged in our study; the field could have consisted of a large number of disconnected components, each in its own isolated discipline. There are only two small islands with very specific, narrow topics; the entire field could have been that fragmented. Second, we found that the “protective belt” is a two-layer structure, showing the evolution of the field’s development. These findings provide solid evidence for the existence of a “hard core” in the new science that the Network Science journal is helping advance. And that, in fact, is its contribution, measured well beyond a simple impact factor.

6. Limitations and directions for future research

This study has several limitations. First, the sample size is small and the time span of the study is short. The journal is not yet ten years old and, in most disciplines, would still be considered new. As a result, most trends available for examination under the universalistic approach to article impact were not found to be statistically significant in this study; a replication in several years is perhaps warranted. Second, having isolated different article types similar to Colquitt & Zapata-Phelan’s (Reference Colquitt and Zapata-Phelan2007) study, we could have attempted to create islands for the different types. Doing so would have made the study much more complex, hardly fitting into the scope of one article; we leave it as one possible avenue for future research.

Another important limitation is that we used article citations as a proxy for journal impact. Having assumed that article citations reflect journal impact, we then tested the factors that predict the number of citations an article receives. This represents a leap from the factors affecting article citations to journal impact, without evaluating how the number of citations influences the impact of the journal. While many measures of journal impact are based on article citations, there is clear agreement in the literature that citation-based measures of journal impact are, at best, incomplete. Given that we also analyzed the “hard core” of Network Science, we believe we have established the impact of the journal on the field, even with this leap from article citations to journal impact. Future research, however, could examine journal impact using alternative measures.

We see several key research directions beyond mere replication. First, a more substantive examination of the journals citing and cited by Network Science articles would help us understand interdisciplinary relationships and the impact of the journal on other disciplines. Second, the coevolution of the journal and the field, reflected in citation trends over time, could provide insight into how the field of network science is developing. It would also be interesting to see whether the advanced modeling techniques (of the “pioneer” and “explorer” article types) allow some of the applied fields to develop breakthrough ideas. Most certainly, this is a possibility for fields such as medicine and epidemiology, sociology and political science, environmental studies, and many others. With the current trends in theory building and high-quality methodology development, Network Science appears to be more than just the flagship journal: it provides the necessary tools, theory and methodology, for expanding the boundaries of interdisciplinary research and advancing science in all contiguous disciplines.

Another possible route of improvement is the use of more modern text-processing techniques. First, it would be beneficial to focus on key phrases instead of individual keywords, as this would allow better interpretation of the meaning of the networks and more accurate maps of existing schools of thought. For example, a phrase-detection algorithm could be applied to the data to extract phrases that commonly appear in the dataset rather than individual words (e.g., community detection, actor-oriented model). An example of such an algorithm, based on the work of Mikolov et al. (Reference Mikolov, Sutskever, Chen, Corrado and Dean2013) and Gerlof (Reference Gerlof2009), is implemented in the Gensim Python library; a sketch is shown below. A second improvement could come from better text preprocessing, for example, better algorithms for lemmatization and stopword removal. However, a comparison of the performance of different methods of lemmatization and text preprocessing in general is outside the scope of this study and remains the subject of future work.
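A minimal sketch of such phrase detection with Gensim follows; the tokenized toy documents and threshold values are illustrative, and the normalized PMI scorer corresponds to the measure discussed by Gerlof (Reference Gerlof2009).

```python
# Minimal sketch of phrase detection with Gensim's Phrases model; the toy documents
# and threshold values are illustrative.
from gensim.models.phrases import Phrases

docs = [["community", "detection", "in", "social", "networks"],
        ["actor", "oriented", "model", "for", "network", "dynamics"]]

bigrams = Phrases(docs, min_count=1, threshold=0.3, scoring="npmi")
print(list(bigrams[docs[0]]))   # token pairs scoring above the threshold are joined, e.g. "community_detection"
```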

Yet another possible avenue for future research is the use of community detection algorithms designed to work with bipartite networks, for example, the algorithm proposed by Zhang & Ahn (Reference Zhang and Ahn2015). Comparison of different methods of community detection applied to bibliometric networks, together with all the other improvements mentioned above, could help advance the “science of science” in its quest to evaluate journal quality.
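As one hedged illustration of the general idea (not the weighted matrix factorization of Zhang & Ahn), a bipartite article-keyword network can be projected onto the keyword side and a standard modularity-based community detection run on the weighted projection; the tiny example graph is hypothetical.

```python
# Illustrative alternative to bipartite-specific algorithms: project the bipartite
# article-keyword graph onto keywords and detect communities on the weighted projection.
import networkx as nx
from networkx.algorithms import bipartite, community

B = nx.Graph()
B.add_nodes_from(["a1", "a2", "a3"], bipartite=0)                 # articles
B.add_nodes_from(["network", "model", "epidemic"], bipartite=1)   # keywords
B.add_edges_from([("a1", "network"), ("a1", "model"),
                  ("a2", "network"), ("a2", "epidemic"), ("a3", "model")])

K = bipartite.weighted_projected_graph(B, ["network", "model", "epidemic"])
communities = community.greedy_modularity_communities(K, weight="weight")
print([sorted(c) for c in communities])
```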

Competing interests

None.

Supplementary materials

For supplementary material for this article, please visit https://doi.org/10.1017/nws.2022.41

Footnotes

Guest Editors (Special Issue on Scientific Networks): Dmitry Zaytsev, Noshir Contractor

1 Network Science SCImago 2020 scientific journal ranking score is 0.61, and Scopus H-index is 18. https://www.scimagojr.com/journalsearch.php?q=21100464772&tip=sid&clean=0.

2 i10-Index is an index measuring impact; it was introduced by Google Scholar in 2011 (He et al., Reference He, Tan and Lotfipour2014) and provides the number of articles cited 10 times or more.

4 The data were collected in May of 2021.

6 We would like to thank anonymous reviewers for this suggestion.

7 We thank anonymous reviewer for this suggestion.

References

Ahmad, S., Sohail, M., Waris, A., Abdel-Magid, I. M., Pattukuthu, A., & Azad, M. S. (2019). Evaluating journal quality: A review of journal citation indicators and ranking in library and information science core journals. COLLNET Journal of Scientometrics and Information Management, 13(2), 345–363.
Allen, N., & Heath, O. (2013). Reputations and research quality in British political science: The importance of journal and publisher rankings in the 2008 RAE. The British Journal of Politics and International Relations, 15(1), 147–162.
Annalingam, A., Damayanthi, H., Jayawardena, R., & Ranasinghe, P. (2014). Determinants of the citation rate of medical research publications from a developing country. SpringerPlus, 3(1), 140.
Austin, A. (1993). Reliability of citation counts in judgments on promotion, tenure, and status, the essay. Arizona Law Review, 35(4), 829–840.
Ayres, I., & Vars, F. E. (2000). Determinants of citations to articles in elite law reviews. The Journal of Legal Studies, 29(S1), 427–450.
Baldi, S. (1998). Normative versus social constructivist processes in the allocation of citations: A network-analytic model. American Sociological Review, 63(6), 829–846. doi: 10.2307/2657504
Barabási, A.-L. (2016). Network science (1st ed.). Cambridge, UK: Cambridge University Press.
Batagelj, V., Doreian, P., Ferligoj, A., & Kejzar, N. (2014). Understanding large temporal networks and spatial networks: Exploration, pattern searching, visualization and network evolution (1st ed.). Southern Gate, Chichester, West Sussex: Wiley.
Batagelj, V. (2017). WoS2Pajek. Networks from Web of Science. Version 1.5. Retrieved from http://vladowiki.fmf.uni-lj.si/doku.php?id=pajek:wos2pajek
Batagelj, V. (2020). On fractional approach to analysis of linked networks. Scientometrics. doi: 10.1007/s11192-020-03383-y
Batagelj, V., & Cerinšek, M. (2013). On bibliographic networks. Scientometrics, 96(3), 845–864.
Batagelj, V., Ferligoj, A., & Squazzoni, F. (2017). The emergence of a field: A network analysis of research on peer review. Scientometrics, 113, 503–532. https://doi.org/10.1007/s11192-017-2522-8
Batagelj, V., & Maltseva, D. (2020). Temporal bibliographic networks. Journal of Informetrics, 14(1), 101006. https://doi.org/10.1016/j.joi.2020.101006
Bedeian, A. G., & Feild, H. S. (1980). Academic stratification in graduate management programs: Departmental prestige and faculty hiring patterns. Journal of Management, 6(2), 99–115.
Begel, A., Bosch, J., & Storey, M. A. (2013). Social networking meets software development: Perspectives from GitHub, MSDN, Stack Exchange, and TopCoder. IEEE Software, 30(1), 52–66.
Bellavitis, C., Filatotchev, I., & Souitaris, V. (2017). The impact of investment networks on venture capital firm performance: A contingency framework. British Journal of Management, 28(1), 102–119.
Beyer, J. M., Chanove, R. G., & Fox, W. B. (1995). The review process and the fates of manuscripts submitted to AMJ. Academy of Management Journal, 38(5), 1219–1260.
Biglan, A. (1973). The characteristics of subject matter in different academic areas. Journal of Applied Psychology, 57(3), 195.
Borgatti, S. P., & Halgin, D. S. (2011). On network theory. Organization Science, 22(5), 1168–1181.
Borgman, C. L. (1990). Scholarly communication and bibliometrics. Newbury Park: Sage Publications.
Brandes, U., & Erlebach, T. (Eds.). (2005). Network analysis: Methodological foundations. Berlin, Heidelberg, New York: Springer.
Brandes, U., Robins, G., McCranie, A., & Wasserman, S. (2013). What is network science? Network Science, 1(1), 1–15.
Bornmann, L. (2013). What is societal impact of research and how can it be assessed? A literature survey. Journal of the American Society for Information Science and Technology, 64(2), 217–233.
Butler, D. (2013). Investigating journals: The dark side of publishing. Nature, 495, 433–435.
Calabretta, G., Durisin, B., & Ogliengo, M. (2011). Uncovering the intellectual structure of research in business ethics: A journey through the history, the classics, and the pillars of Journal of Business Ethics. Journal of Business Ethics, 104(4), 499–524.
Callon, M., Courtial, J. P., Turner, W. A., & Bauin, S. (1983). From translations to problematic networks: An introduction to co-word analysis. Information (International Social Science Council), 22(2), 191–235.
Caloffi, A., Lazzeretti, L., & Sedita, S. R. (2018). The story of cluster as a cross-boundary concept: From local development to management studies. In Advances in Spatial Science (pp. 123–137). doi: 10.1007/978-3-319-90575-4_8
Carrington, P. J., Scott, J., & Wasserman, S. (Eds.). (2005). Models and methods in social network analysis. Cambridge: Cambridge University Press.
Carroll, J. M., & Swatman, P. A. (2000). Structured-case: A methodological framework for building theory in information systems research. European Journal of Information Systems, 9(4), 235–242.
Chavarro, D., Ràfols, I., & Tang, P. (2018). To what extent is inclusion in the Web of Science an indicator of journal ‘quality’? Research Evaluation, 27(2), 106–118.
Clark, G., & Phillips, A. (2014). Inside book publishing (5th ed.). London and New York: Routledge.
Cobo, M. J., López-Herrera, A. G., Herrera-Viedma, E., & Herrera, F. (2011). Science mapping software tools: Review, analysis, and cooperative study among tools. Journal of the American Society for Information Science and Technology, 62(7), 1382–1402.
Cole, S. (1979). Age and scientific performance. American Journal of Sociology, 84(4), 958–977.
Cole, J. R., & Cole, S. (1974). Social stratification in science. American Journal of Physics, 42(10), 923–924.
Colquitt, J. A., & Zapata-Phelan, C. P. (2007). Trends in theory building and theory testing: A five-decade study of the Academy of Management Journal. Academy of Management Journal, 50(6), 1281–1303.
Corley, K. G., & Gioia, D. A. (2011). Building theory about theory building: What constitutes a theoretical contribution? Academy of Management Review, 36(1), 12–32.
Cosentino, V., Izquierdo, J. L. C., & Cabot, J. (2017). A systematic mapping study of software development with GitHub. IEEE Access, 5, 7173–7192.
Coursaris, C. K., & Van Osch, W. (2014). A scientometric analysis of social media research (2004–2011). Scientometrics, 101(1), 357–380.
Cunningham, G. B. (2013). Theory and theory development in sport management. Sport Management Review, 16(1), 1–4.
De Nooy, W., Mrvar, A., & Batagelj, V. (2018). Exploratory social network analysis with Pajek: Revised and expanded edition for updated software. Cambridge: Cambridge University Press.
Demšar, J., Curk, T., Erjavec, A., Gorup, Č., Hočevar, T., Milutinovič, M., … Štajdohar, M. (2013). Orange: Data mining toolbox in Python. The Journal of Machine Learning Research, 14(1), 2349–2353.
Dewey, J. (1910). Science as subject-matter and as method. Science, 31(787), 121–127.
Didegah, F., & Thelwall, M. (2013). Determinants of research citation impact in nanoscience and nanotechnology. Journal of the American Society for Information Science and Technology, 64(5), 1055–1064.
Doreian, P., Batagelj, V., & Ferligoj, A. (Eds.). (2020). Advances in network clustering and blockmodeling (1st ed.). Hoboken, NJ: Wiley.
Easley, D., & Kleinberg, J. (2010). Networks, crowds, and markets (Vol. 8). Cambridge: Cambridge University Press.
Eran, T. (2017). Measurement in science. The Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/archives/fall2017/entries/measurement-science/
Ewing, G. J. (1966). Citation of articles from Volume 58 of the Journal of Physical Chemistry. Journal of Chemical Documentation, 6(4), 247–250.
Fortunato, S., Bergstrom, C. T., Börner, K., Evans, J. A., Helbing, D., Milojević, S., … Barabási, A.-L. (2018). Science of science. Science, 359(6379).
Fujimoto, K., Snijders, T. A., & Valente, T. W. (2018). Multivariate dynamics of one-mode and two-mode networks: Explaining similarity in sports participation among friends. Network Science, 6(3), 370–395.
Gabbidon, S. L., Higgins, G., & Martin, F. (2010). Press rankings in criminology/criminal justice: A preliminary assessment of book publisher quality. Journal of Criminal Justice Education, 21(3), 229–244.
Galiani, S., & Gálvez, R. H. (2017). The life cycle of scholarly articles across fields of research (No. w23447). National Bureau of Economic Research.
Garay, A. M., Hashimoto, E. M., Ortega, E. M., & Lachos, V. H. (2011). On estimation and influence diagnostics for zero-inflated negative binomial regression models. Computational Statistics & Data Analysis, 55(3), 1304–1318.
Gardner, W. L., Cogliser, C. C., Davis, K. M., & Dickens, M. P. (2011). Authentic leadership: A review of the literature and research agenda. The Leadership Quarterly, 22(6), 1120–1145.
Garfield, E. (1979). Is citation analysis a legitimate evaluation tool? Scientometrics, 1(4), 359–375.
Gerlof, B. (2009). Normalized (pointwise) mutual information in collocation extraction. Proceedings of GSCL, 30, 31–40.
Goodreau, S. M. (2007). Advances in exponential random graph (p*) models applied to a large social network. Social Networks, 29(2), 231–248.
Gottfredson, S. D. (1978). Evaluating psychological research reports: Dimensions, reliability, and correlates of quality judgments. American Psychologist, 33(10), 920.
Granovsky, Y. V. (2001). Is it possible to measure science? V. V. Nalimov’s research in scientometrics. Scientometrics, 52(2), 127–150.
Greco, A. N. (2013). The book publishing industry (3rd ed.). New York: Routledge.
Havlin, S., Kenett, D. Y., Ben-Jacob, E., Bunde, A., Cohen, R., Hermann, H., … Portugali, J. (2012). Challenges in network science: Applications to infrastructures, climate, social systems and economics. The European Physical Journal Special Topics, 214(1), 273–293.
He, C., Tan, C., & Lotfipour, S. (2014). WestJEM’s impact factor, h-index, and i10-index: Where we stand. Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health, 15(1).
Helmreich, R. L., Spence, J. T., Beane, W. E., Lucker, G. W., & Matthews, K. A. (1980). Making it in academic psychology: Demographic and personality correlates of attainment. Journal of Personality and Social Psychology, 39(5), 896.
Hull, R. P., & Wright, G. B. (1990). Faculty perceptions of journal quality: An update. Accounting Horizons, 4(1), 77.
Jarman, A. M. (2020). Hierarchical cluster analysis: Comparison of single linkage, complete linkage, average linkage and centroid linkage method.
Järvinen, P. (2011). A new taxonomy for developing and testing theories. In Emerging themes in information systems and organization studies (pp. 21–32). Physica-Verlag HD.
Jarwal, S. D., Brion, A. M., & King, M. L. (2009). Measuring research quality using the journal impact factor, citations and ‘ranked journals’: Blunt instruments or inspired metrics? Journal of Higher Education Policy and Management, 31(4), 289–300.
Jawad, F. (2020). The journal impact factor: Does it reflect the quality of a journal? JPMA. The Journal of the Pakistan Medical Association, 70(1), 1–2.
Jia, L., You, S., & Du, Y. (2012). Chinese context and theoretical contributions to management and organization research: A three-decade review. Management and Organization Review, 8(1), 173–209.
Johnson, D. (1997). Getting noticed in economics: The determinants of academic citations. The American Economist, 41(1), 43–52.
Judge, T. A., Cable, D. M., Colbert, A. E., & Rynes, S. L. (2007). What causes a management article to be cited—article, author, or journal? Academy of Management Journal, 50(3), 491–506.
Kadushin, C. (2012). Understanding social networks: Theories, concepts, and findings. USA: OUP.
Kalliamvakou, E., Gousios, G., Blincoe, K., Singer, L., German, D. M., & Damian, D. (2014). The promises and perils of mining GitHub. In Proceedings of the 11th Working Conference on Mining Software Repositories (pp. 92–101).
Khan, B. S., & Niazi, M. A. (2017). Network community detection: A review and visual survey. arXiv preprint arXiv:1708.00977.
Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163.
Lakatos, I. (1968). Criticism and the methodology of scientific research programmes. In Proceedings of the Aristotelian Society (Vol. 69, pp. 149–186). Aristotelian Society, Wiley.
Lariviere, V., & Sugimoto, C. R. (2019). The journal impact factor: A brief history, critique, and discussion of adverse effects. In Springer handbook of science and technology indicators (pp. 3–24). Cham: Springer.
Law, R., & Leung, D. (2020). Journal impact factor: A valid symbol of journal quality? Tourism Economics, 26(5), 734–742.
Leberman, S. I., Eames, B., & Barnett, S. (2016). Unless you are collaborating with a big name successful professor, you are unlikely to receive funding. Gender and Education, 28(5), 644–661.
Levy, P. (2003). A methodological framework for practice-based research in networked learning. Instructional Science, 31(1–2), 87–109.
Lima, A., Rossi, L., & Musolesi, M. (2014). Coding together at scale: GitHub as a collaborative social network. In Eighth International AAAI Conference on Weblogs and Social Media.
Lindsey, D. (1989). Using citation counts as a measure of quality in science: Measuring what’s measurable rather than what’s valid. Scientometrics, 15(3–4), 189–203.
Lister, R., & Box, I. (2009). A citation analysis of the ACSC 2006–2008 proceedings, with reference to the CORE conference and journal rankings. In Conferences in Research and Practice in Information Technology Series.
Long, J. S. (1997). Regression models for categorical and limited dependent variables (1st ed.). Thousand Oaks: SAGE Publications.
MacRoberts, M. H., & MacRoberts, B. R. (1989). Problems of citation analysis: A critical review. Journal of the American Society for Information Science, 40(5), 342–349.
Maltseva, D., & Batagelj, V. (2019). Social network analysis as a field of invasions: Bibliographic approach to study SNA development. Scientometrics. doi: 10.1007/s11192-019-03193-x
Maltseva, D., & Batagelj, V. (2020). Towards a systematic description of the field using keywords analysis: Main topics in social networks. Scientometrics, 123(1), 357–382.
Maltseva, D., & Batagelj, V. (2020a). iMetrics: The development of the discipline with many names. Scientometrics. doi: 10.1007/s11192-020-03604-4
Maltseva, D., & Batagelj, V. (2021). Journals publishing social network analysis. Scientometrics, 126, 3593–3620. https://doi.org/10.1007/s11192-021-03889-z
Mane, K. K., & Börner, K. (2004). Mapping topics and topic bursts in PNAS. Proceedings of the National Academy of Sciences, 101(suppl 1), 5287–5290.
Marshakova-Shaikevich, I. (1996). The standard impact factor as an evaluation tool of science fields and scientific journals. Scientometrics, 35(2), 283–290.
Mialon, H. M. (2010). What’s in a title? Predictors of the productivity of publications (July 1, 2010).
Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. arXiv:1310.4546 [cs, stat]. http://arxiv.org/abs/1310.4546
Mingers, J., & Leydesdorff, L. (2015). A review of theory and practice in scientometrics. European Journal of Operational Research, 246(1), 1–19.
Mingers, J., & Yang, L. (2017). Evaluating journal quality: A review of journal citation indicators and ranking in business and management. European Journal of Operational Research, 257(1), 323–337.
Mitra, A. C. (1970). The bibliographical reference: A review of its role. Annals of Library Science, 52(8), 117–123.
Moghimbeigi, A., Eshraghian, M. R., Mohammad, K., & Mcardle, B. (2008). Multilevel zero-inflated negative binomial regression modeling for over-dispersed count data with extra zeros. Journal of Applied Statistics, 35(10), 1193–1202.
Newman, J. M., & Cooper, E. (1993). Determinants of academic recognition: The case of the Journal of Applied Psychology. Journal of Applied Psychology, 78(3), 518.
Newman, M. (2018). Networks (2nd ed.). Oxford, UK and New York, NY: Oxford University Press.
Ortu, M., Hall, T., Marchesi, M., Tonelli, R., Bowes, D., & Destefanis, G. (2018). Mining communication patterns in software development: A GitHub analysis. In Proceedings of the 14th International Conference on Predictive Models and Data Analytics in Software Engineering (pp. 70–79).
Onodera, N., & Yoshikane, F. (2015). Factors affecting citation rates of research articles. Journal of the Association for Information Science and Technology, 66(4), 739–764.
Patty, J. W., & Penn, E. M. (2017). Network theory and political science. In Victor, J. N., Montgomery, A. H., & Lubell, M. (Eds.), The Oxford Handbook of Political Networks (pp. 147–171). Oxford University Press.
Peritz, B. C. (1983). A classification of citation roles for the social sciences and related fields. Scientometrics, 5(5), 303–312.
Petersen, A. M., & Penner, O. (2014). Inequality and cumulative advantage in science careers: A case study of high-impact journals. EPJ Data Science, 3(1), 24.
Petruzzelli, A. M., Rotolo, D., & Albino, V. (2015). Determinants of patent citations in biotechnology: An analysis of patent influence across the industrial and organizational boundaries. Technological Forecasting and Social Change, 91, 208–221.
Phelan, T. J. (1999). A compendium of issues for citation analysis. Scientometrics, 45(1), 117–136. doi: 10.1007/BF02458472
Raff, M., Johnson, A., & Walter, P. (2008). Painful publishing. Science, 321(5885), 36.
Rossiter, M. W. (1993). The Matthew Matilda effect in science. Social Studies of Science, 23(2), 325–341.
Rynes, S. L. (2012). The research-practice gap in I/O psychology and related fields: Challenges and potential solutions. In The Oxford handbook of organizational psychology (Vol. 1, pp. 409–452).
Saha, S., Saint, S., & Christakis, D. A. (2003). Impact factor: A valid measure of journal quality? Journal of the Medical Library Association, 91(1), 42.
Scott, J., & Carrington, P. J. (Eds.). (2011). The SAGE handbook of social network analysis (1st ed.). London, Thousand Oaks, CA: SAGE Publications.
Sedighi, M. (2016). Application of word co-occurrence analysis method in mapping of the scientific fields (case study: The field of informetrics). Library Review, 65(1/2), 52–64. doi: 10.1108/LR-07-2015-0075
Sewell, D. K. (2018). Simultaneous and temporal autoregressive network models. Network Science, 6(2), 204–231.
Shadish, W. R., Tolliver, D., Gray, M., & Sen Gupta, S. K. (1995). Author judgements about works they cite: Three studies from psychology journals. Social Studies of Science, 25(3), 477–498.
Shah, S. K., & Corley, K. G. (2006). Building better theory by bridging the quantitative–qualitative divide. Journal of Management Studies, 43(8), 1821–1835.
Simon, H. A., & Groen, G. J. (1973). Ramsey eliminability and the testability of scientific theories. The British Journal for the Philosophy of Science, 24(4), 367–380.
Simon, H. (1994). Testability and approximation. In The philosophy of economics (pp. 214–216).
Simons, K. (2008). The misused impact factor. Science, 322(5899), 165.
Singh, D., & Delios, A. (2017). Corporate governance, board networks and growth in domestic and international markets: Evidence from India. Journal of World Business, 52(5), 615–627.
Sinwar, D., & Kaushik, R. (2014). Study of Euclidean and Manhattan distance metrics using simple k-means clustering. International Journal for Research in Applied Science and Engineering Technology, 2(5), 270–274.
Small, H. (1973). Co-citation in the scientific literature: A new measure of the relationship between two documents. Journal of the American Society for Information Science, 24(4), 265–269.
Smirnov, I. (2019). Schools are segregated by educational outcomes in the digital space. PLOS ONE, 14(5), e0217142. doi: 10.1371/journal.pone.0217142
Smirnov, I., & Thurner, S. (2017). Formation of homophily in academic performance: Students change their friends rather than performance. PLOS ONE, 12(8), e0183473. doi: 10.1371/journal.pone.0183473
Van Eck, N. J., & Waltman, L. (2014). Visualizing bibliometric networks. In Measuring scholarly impact (pp. 285–320). Cham: Springer.
Van Raan, A. F. (2014). Advances in bibliometric analysis: Research performance assessment and science mapping. In Bibliometrics use and abuse in the review of research performance (Vol. 87, pp. 17–28).
Vasilescu, B., Serebrenik, A., Devanbu, P., & Filkov, V. (2014). How social Q&A sites are changing knowledge sharing in open source software communities. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 342–354).
Verbeek, A., Debackere, K., Luwel, M., & Zimmermann, E. (2002). Measuring progress and evolution in science and technology – I: The multiple uses of bibliometric indicators. International Journal of Management Reviews, 4(2), 179–211.
Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications (1st ed.). Cambridge, New York: Cambridge University Press.
Weber, R. (2012). Evaluating and developing theories in the information systems discipline. Journal of the Association for Information Systems, 13(1), 1.
Xie, Y., Wu, Q., & Li, X. (2019). Editorial team scholarly index (ETSI): An alternative indicator for evaluating academic journal reputation. Scientometrics, 120(3), 1333–1349.
Yang, Y., Wu, M., & Cui, L. (2012). Integration of three visualization methods based on co-word analysis. Scientometrics, 90(2), 659–673.
Yaveroglu, O. N., Fitzhugh, S. M., Kurant, M., Markopoulou, A., Butts, C. T., & Przulj, N. (2014). ergm.graphlets: A package for ERG modeling based on graphlet statistics. arXiv preprint arXiv:1405.7348.
Zahra, S. A., & Newey, L. R. (2009). Maximizing the impact of organization science: Theory-building at the intersection of disciplines and/or fields. Journal of Management Studies, 46(6), 1059–1075.
Zhang, L., Qian, Q., Zhang, S., Zhang, W., & Zhu, K. (2019). A new journal ranking method: The reputation analysis of citation behavior model. IEEE Access, 7, 19382–19394.
Zhang, Z.-Y., & Ahn, Y.-Y. (2015). Community detection in bipartite networks using weighted symmetric binary matrix factorization. International Journal of Modern Physics C, 26(9), 1550096.
Zhao, R., & Wei, M. (2017). Impact evaluation of open source software: An altmetrics perspective. Scientometrics, 110(2), 1017–1033.