
The Price of Science: MOR's Organization Behavior Editorial Area

Published online by Cambridge University Press: 04 March 2019

Xu Huang*
Hong Kong Baptist University, China

Type: Editorial Statements

Copyright © The International Association for Chinese Management Research 2019

I am honored to join the editorial team of Management and Organization Review (MOR) as the Deputy Editor for the organizational behavior (OB) area. MOR has established its reputation as a leading outlet for China research. In the last five years, under the leadership of Professor Arie Lewin, MOR has attracted researchers from other transforming economies and become a platform for scholarly conversations that test and extend existing theories and explore new theories in China as well as in other emerging economies. More recently, MOR has joined many other journals in spearheading initiatives to revolutionize the review process and improve the rigor and reproducibility of our science (Lewin et al., 2016). I concur with this strategic move and would like to take this opportunity to outline my humble views on the rigor of management research and my critique of current research practices.

Over the last two decades, the International Association for Chinese Management Research (IACMR) and MOR have led the transformation of management research culture in China, contributing to a tremendous improvement in the quality of China's management research and nurturing many talented young researchers. During this period, leading business schools in China have established US-style tenure systems that reward scholars who publish in respectable international journals according to a journal list. A more ‘progressive’ practice at many business schools is to attach a price tag to journals according to their ranking on the list and to offer monetary rewards to scholars who publish in them. Science, then, has a price. These changes have coincided with the growing popularity of workshops on ‘how to publish in top journals’ at academic events and conferences throughout the country. Engaging in scientific endeavors has become an instrumental path to achieving a ‘good life’. In this inaugural editorial essay, I question this trend toward the monetization of scientific work and reiterate the MOR editorial team's commitment to promoting and safeguarding priceless ‘good science’.

Why are so many junior faculty and PhD students flocking to workshops on ‘how to publish in top-tier journals’? The phrase is simply too seductive, and too costly, to resist, because scientific output has a price. Here, I do not want to devalue these workshops, which have proven effective in upgrading the skills needed to perform high-quality research. Instead, I suggest that more workshops and PhD education programs should place equal emphasis on educating our junior researchers about ‘how to do good science’. In my view, top-tier journal publications are the natural outcomes of good science. Good science helps us better ‘understand, explain, and predict the world we live in’ through the application of rigorous scientific methods (Okasha, 2002). In reality, however, once a paper is published in a good journal and a new item is added to our CV, the mission is accomplished. How often do we consider whether our research findings help people better understand, explain, and predict the phenomena we are investigating? How much do we care about the scientific truth of our findings and the reproducibility of our models? Not many researchers care much about these questions; I certainly did not at the early stage of my career. This is because research programs in most business schools attach more value to the price of science than to the virtue of science. Publishing in top-tier journals, rather than finding the truth, becomes the purpose of scientific research. This mentality has, at least in part, allowed dodgy results to sneak into the literature, causing the reproducibility crisis widely documented in both the academic literature (e.g., Lewin et al., 2016) and the popular press (The Economist, 2016).

I am a regular reader of The Economist, a popular and influential magazine (though its editors keep insisting that it is a newspaper). In each of the last five years, I have read one or two articles in The Economist grumbling (often in a cynical tone) about the reproducibility crisis in the sciences in general, and in the social sciences in particular. In a recent article, the author commented that exciting results ‘from a scientific study are in effect meaningless if they cannot be replicated’, and yet not many scientists care about that (The Economist, 2018: 66). The author went on to describe a new way to put a price on scientific output in order to enhance the reproducibility of scientific research: establish a ‘stock market of scientific reproducibility’. This idea is based on a study published in the Proceedings of the National Academy of Sciences (PNAS; Dreber et al., 2015).

Specifically, Dreber et al. (2015) selected 44 papers published in prominent psychology journals and recruited 92 scholars through an online platform to ‘invest’ in the reproducibility of those papers in a ‘stock market of scientific reproducibility’, or, to use their term, a ‘prediction market’. Instead of attaching a price tag to the journals where the papers were published, the prediction market had the 92 participating researchers bet real money on the reproducibility of the papers. Each participant was instructed to read the papers and was given $100 to trade in an online prediction market, where they could buy and sell the papers based on their estimated reproducibility. After 2,496 transactions, some papers had gained much higher ‘market values’ than others. The researchers then conducted experiments to try to replicate the findings of the 44 papers. They completed replication experiments for 41 of the 44 papers (the remaining three were delayed). Of these 41 studies, 16 (39%) were successfully replicated and 25 (61%) were not. More surprisingly, the authors reported that ‘the prediction markets correctly predict[ed] the outcomes of 71% of the replications’ (Dreber et al., 2015: 15344). This finding suggests that the scientific community, as a whole, has a good sense of which studies are replicable. The Economist (2018: 66) concluded (rather sarcastically) that ‘[p]erhaps, then, there is a market opportunity in testing scientific results’. Clearly, if the scientific community can assess the reproducibility of scientific studies more accurately in a virtual stock market, the prevalence of ‘junk science’ in our literature is likely caused by the current review system of scholarly journals, which may not be effective in weeding out studies that are not reproducible or reliable.
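To make this scoring logic concrete, the short sketch below (in Python) shows how final market prices can be checked against replication outcomes. All numbers, variable names, and the 0.5 decision threshold are my own illustrative assumptions, not the actual data or procedure of Dreber et al. (2015).

```python
# A minimal, hypothetical sketch of scoring prediction-market prices against
# replication outcomes. Every number below is invented for illustration; none
# of it is Dreber et al.'s (2015) actual market or replication data.

# Final market price for each paper, read as the traders' collective estimate
# of the probability that the paper's key finding will replicate.
final_prices = [0.82, 0.31, 0.55, 0.12, 0.67, 0.44]

# Outcome of each actual replication attempt (True = finding replicated).
replicated = [True, False, False, False, True, False]

# Assumed decision rule: a final price above 0.5 counts as the market
# predicting a successful replication.
THRESHOLD = 0.5

# A prediction is correct when the market's yes/no call matches the outcome.
correct = sum(
    (price > THRESHOLD) == outcome
    for price, outcome in zip(final_prices, replicated)
)

accuracy = correct / len(final_prices)
print(f"Market called {correct} of {len(final_prices)} outcomes correctly "
      f"({accuracy:.0%})")
```

Read in this way, the reported 71% accuracy means that the market's yes/no calls matched the actual replication outcomes for roughly 29 of the 41 completed replications.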

Certainly, a stock market of scientific reproducibility lies in the distant future, if it ever materializes. In the meantime, the editorial teams of journals can do more to assess the reproducibility of empirical studies and, more importantly, to safeguard the scientific rigor of studies using diverse methodologies. To this end, MOR's new review process places more emphasis on studies’ data transparency, the robustness of findings, the treatment of outliers and null findings, and so forth. We also encourage authors to openly share their data and research materials, and we have launched new practices of preregistration and preapproval (for details, please refer to the MOR online ‘Editorial Statement and Reviewing Policies’). The new editorial team for the OB area will take part in this revolution of the review process, not only for the quality of this journal but for the development of scientific rigor in our field as a whole.

Lastly, I would like to thank the departing Senior Editors of the OB area, Chao Chen, Zhen Xiong Chen, Ray Friedman, and Jia Lin Xie, for their contributions to MOR. The new Senior Editor team includes Roy Chua (Singapore Management University) and Zhi-Xue Zhang (Peking University), who served on the previous OB team and have agreed to continue. We have also appointed five new Senior Editors: Jasmine Hu (Ohio State University), Ning Li (University of Iowa), Jian Liang (Tongji University), Wu Liu (The Hong Kong Polytechnic University), and Li Ma (Peking University). This new team looks forward to working with management scholars to produce priceless ‘good science’.

REFERENCES

Dreber, A., Pfeiffer, T., Almenberg, J., Isaksson, S., Wilson, B., Chen, Y., Nosek, B. A., & Johannesson, M. 2015. Using prediction markets to estimate the reproducibility of scientific research. Proceedings of the National Academy of Sciences, 112(50): 15343–15347.
Lewin, A. Y., Chiu, C., Fey, C. F., Levine, S. S., McDermott, G., Murmann, J. P., & Tsang, E. 2016. The critique of empirical social science: New policies at Management and Organization Review. Management and Organization Review, 12(4): 649–658.
Okasha, S. 2002. Philosophy of science: A very short introduction. New York: Oxford University Press.
The Economist. 2016. A far from dismal outcome: Microeconomists’ claims to be doing real science turn out to be true. March 5th–12th: 67–68.
The Economist. 2018. Betting on the result: Experts are good at figuring out which experiments can be replicated. September 1st–8th: 66.