
Proxy failures in practice: Examples from the sociology of science

Published online by Cambridge University Press:  13 May 2024

Jakob Kapeller*
Affiliation:
Institute for Socio-Economics, University of Duisburg-Essen, Duisburg, Germany https://www.uni-due.de/soziooekonomie/institut_en and Institute for Comprehensive Analysis of the Economy (ICAE), Johannes Kepler University, Linz, Austria. [email protected] https://www.jku.at/en/institute-for-comprehensive-analysis-of-the-economy/
Stephan Pühringer
Affiliation:
Institute for Comprehensive Analysis of the Economy (ICAE), Johannes Kepler University, Linz, Austria. [email protected] https://www.jku.at/en/institute-for-comprehensive-analysis-of-the-economy/
Johanna Rath
Affiliation:
Institute for Comprehensive Analysis of the Economy (ICAE), Johannes Kepler University, Linz, Austria. [email protected] https://www.jku.at/en/institute-for-comprehensive-analysis-of-the-economy/
Matthias Aistleitner
Affiliation:
Institute for Comprehensive Analysis of the Economy (ICAE), Johannes Kepler University, Linz, Austria. [email protected] https://www.jku.at/en/institute-for-comprehensive-analysis-of-the-economy/
*
Corresponding author: Jakob Kapeller; Email: [email protected]

Abstract

Following John et al., we provide examples of failing proxies that might help to contextualize the role of proxy failures in applied research. We focus on examples from the sociology of science and illustrate how the notion of proxy failure can sharpen applied analysis, if used in a way that does not obscure other dysfunctional effects of proxies.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

We applaud the efforts of John et al. to develop a general theory of the dysfunctionality of proxies by providing a clear definition of proxy failure based on a “unifying mechanism” that can be observed in diverse social and biological systems. This mechanism states that optimization in complex systems creates an endogenous tendency for the proxy to diverge from the true underlying goal.

However, proxy failures in the spirit of John et al. are not the only way in which proxies can fail, mostly because there is typically only a partial overlap between proxies and intrinsic goals. This gives rise to dysfunctional practices (for a transdisciplinary review, see Braganza, 2022) even without assuming some reinforcement effect.

In what follows, we illustrate this argument with three examples from the sociology of science – research evaluation, the political economy of publishing, and the third mission in academia. In all three cases, proxies fail in some way, but only the first represents a clear-cut example of proxy failure in the sense of John et al. We take two main insights from these examples: First, proxy failure as developed by the authors can improve and refine applied research. Second, many relevant instances of failing proxies nonetheless fall outside this narrower definition, because not every relevant dysfunctionality of a proxy is accompanied by a feedback loop that amplifies the distance between proxy and goal.

These insights also bear on how broadly applicable proxy failure is and what role it plays relative to other causes of failing proxies. This facet could be explored more deeply in further research by scrutinizing the examples of proxy failure supplied in Table 1 of John et al. along the lines suggested here.

Our first example relates to proxy failure in the context of research evaluation driven by (the number of) citations. Cheating in the “citation game” (Biagioli, 2016) is an extremely well-documented phenomenon in academia, and John et al. also refer to it as an example of proxy failure. In this context, the universality of proxy failures can even be hypothesized to operate on different levels. A general example is the size bias that may arise in scientific evolution (Sterman & Wittenberg, 1999): large and established research fields are more attractive than small (and potentially disruptive) ones, which further inflates the relative size of the former (see Aistleitner, Kapeller, & Steinerberger, 2018). The underlying logic of preferential attachment then biases the distribution of citations, which in turn is used to evaluate researchers, departments, and journals, which, again, affects visibility, thereby creating another feedback loop.

Our second example pertains to the political economy of scientific publishing, where profit, as a classic proxy measure of firm performance, seems particularly inadequate. Although public debate often conveys the impression that firms with high profits also show high performance, a more critical stance would also ask about the actual sources of these profits. In this spirit, closer inspection suggests that profit as a proxy measure is fundamentally ill-suited to the context of scientific publishers. These firms operate in an environment characterized by substantial indirect subsidies as well as monopoly rents derived from intellectual property rights and from the intrinsic motivations of researchers, who provide manuscripts and peer-review services without financial reward. This combination results in disproportionately high profit margins of 20% to 40%, significantly surpassing those achieved in other industries. In this context, the mismatch between proxy and underlying goals gives rise to “corrupted practices” (Braganza, 2022) that adversely affect the societal goal by appropriating a public good for private gain (Pühringer, Rath, & Griesebner, 2021). This primary dysfunctionality is ex ante unrelated to a proxy failure. However, such a failure can be reconstructed with reference to the trend toward concentration witnessed in the scientific publishing industry in recent years: increasing concentration could indeed map well onto John et al.'s account of proxy failure by further pushing up profit rates. Such a pattern, however, is arguably more difficult to identify and not necessary for recognizing that profit may be an inherently misleading proxy for firm performance.

Our third and final example relates to the public impact of science. The “third mission” in academia has taken on an important role in research evaluation, which typically relies on proxies such as the number of public appearances or citations in policy documents. Here our main concern is that these proxies hardly assess whether the consequences of some public impact are conducive to the goal of the “third mission” (defined as tackling societal challenges). Eugenicists had a huge audience in the 1920s, and the jury on those famous economists and political scientists who helped implement shock therapy after the fall of the Soviet Union is supposedly still out (Pistor, 2022). Although these examples suggest that a qualitative critique of proxies is inherently necessary and that proxy competition may be instrumentalized for political ideologies (Braganza, 2022), they do not directly speak to the narrower notion of a proxy failure. Nonetheless, similar to our second example, an argument along the lines of a proxy failure could be made. This would require the proxy to somehow degrade the quality of inputs from science to society, perhaps because the incentive to receive public attention causes scientists to be less careful or sensible in their public statements.

In conclusion, we note that (potentially) failing proxies are everywhere; this commentary in Behavioral and Brain Sciences (BBS) itself provides a compelling example. Evaluative metrics such as the Journal Impact Factor (JIF) typically count citations per article. Hence, if Web of Science were to classify comments like this one as “full articles,” publishing them would automatically depress the JIF of BBS. The (non)existence of open peer commentary in BBS may thus ultimately depend on some detail in the inner workings of the evaluation industry rather than on its – undoubtedly intrinsic – merit for scientific advancement.

Financial support

Pühringer and Rath acknowledge support from the Austrian Science Fund (FWF, Grant Number ZK60-G27).

Competing interest

None.

References

Aistleitner, M., Kapeller, J., & Steinerberger, S. (2018). The power of scientometrics and the development of economics. Journal of Economic Issues, 52, 816–834. doi:10.1080/00213624.2018.1498721
Biagioli, M. (2016). Watch out for cheats in citation game. Nature News, 535, 201. doi:10.1038/535201a
Braganza, O. (2022). Proxyeconomics, a theory and model of proxy-based competition and cultural evolution. Royal Society Open Science, 9, 211030. doi:10.1098/rsos.211030
Pühringer, S., Rath, J., & Griesebner, T. (2021). The political economy of academic publishing: On the commodification of a public good. PLoS ONE, 16(6), e0253226. doi:10.1371/journal.pone.0253226
Sterman, J. D., & Wittenberg, J. (1999). Path dependence, competition, and succession in the dynamics of scientific revolution. Organization Science, 10, 322–341. doi:10.1287/orsc.10.3.322