
Peer Reviewing in Political Science: New Survey Results

Published online by Cambridge University Press:  02 April 2015

Paul A. Djupe, Denison University

Abstract

Charges are frequently leveled that the peer-review system is broken, and reviewers are overburdened with requests. But this specific charge has been made in the absence of data about the actual reviewing loads of political scientists. I report the results of a recent survey asking a random sample of about 600 APSA members with PhDs what their reviewing loads are like and what their beliefs are about the value of peer reviewing to them and others. Article reviewing loads correspond to rank, institution, and scholarly productivity in predictable ways. At PhD-granting institutions, assistant professors averaged 5.5, associate professors averaged 7.0, and full professors averaged 8.3 in the past year; everyone else averaged just under 3 reviews a year. To recognize the value we place on peer reviewing, we need a system that collects data on who reviews and presents them in a format usable by scholars and their relevant evaluation bodies.

Type: The Profession
Copyright © American Political Science Association 2015

Many people have the sense that the peer-review system is severely stressed. Myriad problems are pitched at peer review—there are too many requests from editors, editors cannot get responses to invitations, some reviewers shirk after they have agreed to review, and authors do not like the feedback (e.g., Borer 1997). For instance, writing in The Monkey Cage, Gelbach (2013) suggests, “The peer-review process, if not broken, is seriously under strain. Editors are forced to make hasty decisions based on imperfect signals from referees. Referees, in turn, are overburdened with review requests” in what Wilson (2011) calls the “tragedy of the reviewer commons.” But are we overburdened by reviews? To begin an informed conversation about reviewing workloads, we need to answer two simple questions: How much do political scientists review? And who bears the burden of reviewership?

Problems with the peer-review system may be compounded if political scientists do not find value in the system or in their contribution to it. But the problem may be simpler if we value peer reviewing and the contributions we make through it, but simply do not feel recognized by relevant evaluation bodies for tenure and promotion. Therefore, another valuable addition to disciplinary conversations about peer reviewing would also address our beliefs regarding the value of peer reviewing to us and to institutional decision makers.

DESIGN AND DATA

In October 2013, a survey link was sent by e-mail to 3,002 APSA members randomly sampled from the membership with a PhD. Footnote 1 After three reminders (the survey was open for a month), 823 began the survey and 607 completed the instrument for a completion rate of 22.3% (not counting the 275 e-mails that bounced). The resulting sample reflects the gender balance of APSA members with a PhD (31.7% female in the sample compared to 33.2% in the APSA population) and slightly overrepresents those in PhD-granting institutions (46.5% in the sample compared to 43.3% in APSA), but is not representative in other ways. APSA members are 65% white whereas the sample is 82% white; 19% of APSA members are associate professors compared to 29% in the sample; 26% of APSA membership has attained the rank of full professor compared to 33% in the sample. In the following text, I use several schemes for dealing with the response bias. For descriptive statistics, I weight the data by race before presenting the statistic by rank. Otherwise, I include relevant demographics, including race, in models (i.e., “model-based weighting”).
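
As a rough illustration of the race-based weighting described above, the sketch below computes post-stratification weights with pandas. The column names and the binary white/nonwhite coding are assumptions for illustration; only the 65% and 82% white figures come from the text.

```python
import pandas as pd

def race_weights(df: pd.DataFrame) -> pd.Series:
    """Weight each respondent by (population share / sample share) of their racial group.

    Assumes a binary 'race' column coded "white"/"nonwhite"; the APSA population
    share (65% white) is taken from the text, the sample share is computed.
    """
    population_share = {"white": 0.65, "nonwhite": 0.35}
    sample_share = df["race"].value_counts(normalize=True)  # roughly 0.82 white in this sample
    return df["race"].map(lambda r: population_share[r] / sample_share[r])

# Example: a weighted mean of reviews per year by rank (column names assumed)
# df["weight"] = race_weights(df)
# df.groupby("rank").apply(lambda g: (g["reviews"] * g["weight"]).sum() / g["weight"].sum())
```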

RESULTS

Although the survey was focused on reviewing for journals, figure 1 reports the diversity of outlets for peer reviewing in the discipline by rank. About 90% of all tenured or tenure-track faculty report reviewing for a journal in the past year. The 79% of non-tenure-track faculty who report reviewing is significantly lower than the rate for tenure-track faculty, as is the 87% figure for full professors compared to assistant professors. All other forms of peer review attract a smaller portion of those with a PhD, and full professors lead the way.

Figure 1 The Diversity of Engagement with Peer Reviewing in Political Science, by Rank (proportion checking each item)

Source: 2013 Peer Review Survey, data weighted. “Have you served as a peer reviewer for a journal, press, university, or granting agency in the past year? Please check all that apply.”

Among those who reviewed for journals, the amount of reviewing varies substantially and in predictable ways. Figure 2 shows a scatterplot of faculty at PhD-granting and non-PhD-granting universities at each rank, with means and standard deviations marked. Reviewing loads are almost universally low except among faculty of higher rank at PhD-granting institutions. The averages among non-tenure-track faculty and faculty at non-PhD-granting universities are statistically indistinguishable (2.5–3 reviews per year). Although reviewing loads vary across faculty ranks within PhD-granting institutions, the differences are not large and the averages are not high. Assistant professors averaged 5.5 reviews, associate professors averaged 7.0 reviews, and full professors averaged 8.3 reviews in the past year. From these data, Footnote 2 it is difficult to conclude that the average PhD holder in political science is overrun with reviews.

Figure 2 Article Reviewing Across Rank and PhD Granting Status of the Institution

Source: 2013 Peer Review Survey, data weighted.

Note: The markers represent means and the capped lines represent one standard deviation in either direction. The assistant, associate, and full professor differences in means by PhD-granting status are significant at p<.01, while the non-tenure-track difference is significant at p<.1.
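
The group means and standard deviations plotted in figure 2 could be tabulated from the respondent-level data along these lines. The column names are assumptions rather than the survey codebook, and the race weighting described above is omitted for brevity.

```python
import pandas as pd

def figure2_summary(reviewers: pd.DataFrame) -> pd.DataFrame:
    """Mean, standard deviation, and count of past-year journal reviews by
    PhD-granting status and rank (assumed columns: 'phd_granting', 'rank',
    'reviews'), restricted to respondents who reviewed for a journal."""
    return (reviewers
            .groupby(["phd_granting", "rank"])["reviews"]
            .agg(["mean", "std", "count"])
            .round(2))
```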

Predicting Reviewership

Reviewing is a function of being asked, and being asked is a function of reputation. We might hope that the standard route to expertise in a subfield is to publish and publish well. Each recent article should add to the likelihood of completing reviews. Publishing a book should also add incrementally to perceived expertise. Editors and their assistants notice who is getting published, but they also rely on the heuristic of the citation list of the piece under review. Moreover, publishing in a journal adds authors to its reviewer registry, and some editors ask for reviews before they let you out from under their decision thumb.

Publishing is not the only route to reputed expertise. Faculty who populate the university highways are more likely to be accessible than those on the byways. Simply being networked into the discipline should encourage reviews, for which I use employment in a department with a PhD program as a proxy. Footnote 3 Rank rewards the productive scholar, especially in PhD-granting institutions, so associate and full professors should have the highest reviewing loads. As we have seen in figure 2, that link does not appear to hold for those outside PhD-granting institutions—rank is not related to reviewing rates there. Of course, those who take on official roles with journals as editors or board members should expect to bear a higher burden.

In part because of the survey response bias, the model includes dummies for men and whites. If the discipline’s power structure is dominated by white men, then there may be a reputational bias in favor of white men, or there may be a skew induced by the (lack of) attention paid to work that women and non-whites tend to produce. Lastly, because reviewers have the choice to accept the request, I include an index of perceived reviewing efficacy, expecting that more efficacy is related to increased reviewership.

The estimates in table 1 from a negative binomial model confirm many of these expectations. Footnote 4 By far the dominant effect in the model is the role of journal publication. Each publication (in the past 3 years) adds .8 reviews, although it interacts with employment in a PhD-granting department. The relationship (see figure 3) shows a significant boost of about 3 reviews from being in a PhD program at low levels of publication. That gap decays to become indistinguishable at higher levels of exposure through publication. Publication of a book also boosts reviewing by .8, suggesting a book has the reputational equivalent of an article. Footnote 5

Table 1 Negative Binomial Regression Estimates of Article Reviewing

Source: 2013 APSA Peer Review Survey.

Model Statistics: N=503, Pseudo R²=.12, LR test (α=0) p<.01
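
A sketch of how a count model like the one in table 1 might be estimated with statsmodels follows. The variable names are placeholders I am assuming for a respondent-level data frame (reviewers), not the article's actual codebook; the specification mirrors the covariates discussed in the text.

```python
import statsmodels.formula.api as smf

# Negative binomial regression of past-year journal reviews on recent publications,
# PhD-granting status (interacted with recent articles), rank, journal roles,
# demographics, and reviewing efficacy. reviewers is the assumed respondent-level
# pandas DataFrame containing these (hypothetical) columns.
nb_model = smf.negativebinomial(
    "reviews ~ articles_3yr * phd_granting + book_3yr"
    " + assistant + associate + editor_or_board"
    " + male + white + efficacy",
    data=reviewers,
).fit()
print(nb_model.summary())
```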

Figure 3 Predicted Review Counts Given the Number of Articles Published and PhD Granting Status of the Institution (estimates from Table 1); 95% confidence intervals

Source: 2013 Peer Review Survey.

Associate and assistant professors are indistinguishable from full professors (the excluded category), all else constant. Footnote 6 It is no surprise that editors and board members review more—about 3.4 reviews more (I am assuming that editors did not include decision-making work for the journal). Women review no less than men, but whites review about 1.4 more pieces, although it is not especially clear why.
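
Predicted review counts like those plotted in figure 3 can then be generated from the fitted model. This sketch reuses the hypothetical nb_model object from the sketch above, holds the other (assumed) covariates at fixed reference values, and omits the confidence intervals shown in the figure.

```python
import pandas as pd

# Predictions across 0-10 recent articles for PhD-granting versus other institutions,
# with the remaining (assumed) covariates held at illustrative reference values.
grid = pd.DataFrame(
    [(a, phd) for a in range(11) for phd in (0, 1)],
    columns=["articles_3yr", "phd_granting"],
)
for col, val in {"book_3yr": 0, "assistant": 0, "associate": 0,
                 "editor_or_board": 0, "male": 1, "white": 1, "efficacy": 0}.items():
    grid[col] = val

grid["predicted_reviews"] = nb_model.predict(grid)  # expected review counts per cell
```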

Accepting Reviewer Requests

While the digital age has vastly simplified the activities of everyone involved with journals, some evidence indicates that it complicates negotiations with reviewers. In other words, it is easier to ignore an e-mail solicitation than a mailed manuscript. The evidence is scattered and infrequently reported by editors. In 1988, the earliest instance (in JSTOR) in which I can find editors reporting such statistics, the American Political Science Review (APSR) received reviews from an enviable 83.3% of those solicited (Patterson, Ripley, and Trish 1988, 911). The Journal of Politics (JOP) editorial report from 2009 (Leighley and Mishler 2009, 3) suggests 56% of solicitations were accepted, although only three-quarters of these resulted in submitted reviews (for a 42% “response rate”). Their 2011 report indicated 66% of solicitations were accepted and again one-quarter failed to result in a review (50% response rate overall). The American Journal of Political Science (AJPS) reported a 59% initial acceptance of the reviewer task in 2007–08 (Stewart 2008), although a non-specified “some” failed to report. The response rate in 2009 for the AJPS was 49% (Stewart 2009), 56% in 2010 (Wilson 2011), 57.4% in 2011 (Wilson 2012), and 61% in 2012 (Wilson 2013). During the past two years at Politics & Religion, we have had a 45% response rate. It would be useful for editors to report this statistic regularly to permit comparison.
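
The composite “response rate” figures quoted above combine two stages, acceptance of the request and follow-through with a completed review; a minimal illustration using the 2009 JOP numbers:

```python
# Response rate = share of solicitations accepted x share of acceptances completed.
accepted = 0.56               # 56% of JOP solicitations accepted (Leighley and Mishler 2009)
completed_if_accepted = 0.75  # roughly three-quarters of acceptances produced a review
response_rate = accepted * completed_if_accepted
print(f"{response_rate:.0%}")  # about 42%
```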

The 70% average response rate reported in these data, therefore, looks like voter turnout estimates from the American National Election Study—it is probably too high. Yet it varies in reasonable ways by rank and institution, as figure 4 shows. Assistant professors at non-PhD-granting institutions have the highest “uptake rate” (87%) and full professors at PhD-granting institutions have the lowest (65%). There is no significant difference overall between those at PhD-granting institutions (71%) and those who are not (74%). Footnote 7 Nonetheless, there is marked variance within and across ranks. Twenty-eight percent of full professors accept fewer than half of the requests to review. Of course, the lower uptake rate could be a function of heavy reviewing loads, the prestige of journals making “the ask,” and other factors.

Figure 4 Review Uptake Rate by Rank and PhD Granting Status of the Institution

Source: 2013 Peer Review Survey, data weighted.

Note: The black markers represent means and the capped lines represent one standard deviation in either direction. Only the difference in means by PhD-granting status among assistant professors is significant (p<.06); neither of the other within-rank differences in means is significant. Across ranks, the assistant-to-associate drop is significant in non-PhD-granting institutions and the associate-to-full drop is significant within PhD-granting institutions; both assistant-to-full drops are significant (p<.01).

Are higher uptake rates a function of being asked or of saying yes? The evidence in table 2 suggests that the uptake rate is independent of the number of review requests received. Footnote 8 Instead, the uptake rate corresponds to institutional incentives and productivity. Scholars who publish more articles (not books) accept reviewer requests more often—1% more for each article in the last 3 years. The untenured accept reviews at higher rates, too. There is some statistically marginal evidence that scholars who feel efficacious about reviewing accept at higher rates.

Table 2 OLS Estimates of the Review Uptake Rate

Source: 2013 Peer Review Survey.

Model Statistics: N=465, Adj. R²=.07, RMSE=.24
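
A sketch of how the uptake-rate model in table 2 might be estimated follows. As before, the variable names are assumptions about the data frame rather than the published codebook; uptake_rate is taken to be the share of requests accepted (0 to 1).

```python
import statsmodels.formula.api as smf

# OLS regression of the review uptake rate on request volume, productivity, rank,
# institution type, demographics, and reviewing efficacy (assumed variable names).
uptake_model = smf.ols(
    "uptake_rate ~ requests + articles_3yr + book_3yr"
    " + assistant + associate + phd_granting"
    " + male + white + efficacy",
    data=reviewers,
).fit()
print(uptake_model.summary())
```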

This evidence is important because it suggests where the problem lies—full professors at PhD-granting institutions have published the most and are called on to review the most (on average 21 times). Assistant professors accept at higher rates, but are asked less frequently (9 times a year at PhD-granting institutions and 4 times outside of them) and are more difficult to identify because they have published less. The problem appears not to be that reviewers are overrun with requests, but rather a structural failure to search beyond the usual suspects.

Beliefs about Peer Reviewing

From all of the complaints about reviewing, one might be led to believe that it is a burden with little value. The respondents to this survey indicate otherwise. Figure 5 reports results from a number of items that bear on what value peer review (“PR” in figure 5) holds and for whom. Belief in the value of peer review is nearly unanimous (95% agree or strongly agree). Very few agree that peer review is rarely worth time that could be better spent on other things, although a quarter are on the fence. A very large portion (80%) sees peer reviewing as a way to keep up with current research. Of course, there is a potential dark side to being in the loop—the prospect of keeping research at the gate. A plurality (45%) agrees that gatekeeping is an important reason to peer review. There is suggestive evidence that scholars in PhD programs who publish at very high rates are more likely to agree with the importance of gatekeeping, but the effect is not statistically significant. Men are more likely to indicate the importance of gatekeeping (by about .3).

Figure 5 Beliefs about Peer Reviewing

Source: 2013 Peer Review Survey, data weighted.

Whereas most believe that being asked to peer review is a measure of professional stature (only 10% explicitly disagree), relatively few (38%) believe that they are recognized for the effort and a third explicitly disagree that they are recognized for it. It is important to realize that the amount of reviewing is not related to recognition (r = .04, p = .30), so faculty continue to review despite the lack of recognition. Perhaps most importantly, a plurality (41%) believes that reviewing is not counted toward tenure and promotion. This sentiment is related to the amount of reviewing, but in the inverse—individuals who do more reviewing are more likely to believe it is not rewarded. It is not surprising to learn that vast numbers of respondents (79%) believe that peer reviewing loads should count as service in tenure and promotion decisions.

CONCLUSION

Peer reviewing makes the publishing world go around. Most scholars believe that peer review is valuable service to the discipline and perform these duties despite not being recognized for the effort. If it is valuable, then this time-consuming, important process should count for something. Reviewing for presses is easy to accept: they pay. But reviewers for journals feel insufficiently recognized, although they review anyway and at essentially the same rate regardless of the number of times asked. The year-end reviewer list at the back of the print journal is marginal compensation. Does anyone actually find that list now that so many individuals access journal articles online and through archives?

One solution is to bring peer review into the light. We need a system that collects data from journals (and other institutions) on who reviews and presents them in a format usable by scholars and their relevant tenure/promotional bodies. Footnote 9 Such a system would provide verifiable data that overcome our suspicions of reviewing self-reports as cheap talk. Rather than the typical career list of journals reviewed for, yearly statistics could easily be added as a meaningful indicator in annual reports. Moreover, such data would enable universities to gain increased exposure from the extent of their faculty’s demonstrated reputation for expertise through peer reviewing.

If universities are going to reward faculty for their reviewing contributions, they need a sense of what is normative, or at least common, in our reviewing practices. These data provide one look at this question, indicating that it is common for professors at all ranks outside of PhD-granting institutions to perform about 2.5 reviews a year. For those in PhD-granting institutions, reviewing varies by rank and increases by about 1.5 reviews per rank, from 5.5 for assistant professors to 8.3 for full professors. Only 10% of this sample does one review a month or more. It is possible that political scientists believe even this workload is too high, but from this look, reports of reviewing fatigue are coming from a highly selective set of faculty.

ACKNOWLEDGMENTS

I would like to thank former executive director of APSA, Michael Brintnall, for agreeing to support this initiative and Jennifer Diascro for a great deal of assistance and a critical eye in shaping and fielding the survey. I would also like to thank Mike Brady, Jennifer Hochschild, Polly Karpowicz, Scott McClurg, Dave Peterson, Anand Sokhey, Kaare Strom, and Jenny Wolak for their close reading of the survey instrument and thoughtful feedback. Thanks also to Mike Brady, Ryan Burge, Andy Lewis, and Jake Neiheisel for their thoughtful feedback about this article.

Footnotes

1. As of January 2013, APSA had 7,010 members with a PhD who had not reached emeritus status. I decided to exclude graduate students, so only about half of PhD holders received the survey, but I neglected to ask that emeriti be excluded. Thus, before I received the data from APSA, internal records were used to remove them (several of whom completed the survey and generously e-mailed me a caution about the usefulness of their responses).

2. I truncated the variable at 30 reviews, which collapsed only 6 responses (out of 584) – a few in the 30s, a 46, a few in the 50s, and an eye-straining 115.

3. This decision gains support from the lack of variation among non-PhD-granting institutions – the number of reviews across MA-, BA-, and non-BA-granting institutions does not vary significantly (not shown).

4. I also tested whether the performance of other kinds of reviews (such as tenure reviews) affected journal reviewing and found no effect.

5. There are surely differential effects by the prestige of the press but this survey did not capture that level of nuance, nor the field of the political scientist, which may affect reviewing rates given the centrality of articles.

6. There is some suggestive (not sufficiently statistically crisp) evidence that differences in beliefs among assistants are not related to peer reviewing or review uptake, whereas those beliefs distinguish these rates among associate and full professors. Assistants may feel that they have to say yes to gain status, please editors, and get tenure.

7. There are also no significant differences among those at academic, non-PhD-granting institutions.

8. Respondents’ perceived uptake rate compares favorably (r=.48) with a figure computed by dividing the reported number of reviews by the number of requests.

9. This capability is forthcoming through ORCID (see http://orcid.org).

REFERENCES

Borer, Douglas A. 1997. “The Ugly Process of Journal Submissions: A Call for Reform.” PS: Political Science and Politics 30 (3): 558–60.
Gelbach, Scott. 2013. “A Modest Proposal to Improve the Peer Review Process.” Available at http://themonkeycage.org/2013/08/27/a-modest-proposal-to-improve-the-peer-review-process/ (accessed December 18, 2013).
Leighley, Jan E., and William Mishler. 2009. “The Journal of Politics Annual Report.” Presented to the Southern Political Science Association.
Patterson, Samuel C., Brian D. Ripley, and Barbara Trish. 1988. “The American Political Science Review: A Retrospective of Last Year and the Last Eight Decades.” PS: Political Science and Politics 21 (4): 908–25.
Stewart, Marianne C. 2008. “Report of the Editor to the Editorial Board of The American Journal of Political Science and to the Executive Council of The Midwest Political Science Association.” Available at http://ajpsblogging.files.wordpress.com/2013/08/ajps-report-2008.pdf (accessed December 19, 2013).
Stewart, Marianne C. 2009. “Report of the Editor to the Editorial Board of The American Journal of Political Science and to the Executive Council of The Midwest Political Science Association.” Available at http://ajpsblogging.files.wordpress.com/2013/08/ajps-report-2009.pdf (accessed December 19, 2013).
Wilson, Rick K. 2011. “Report of the Editor to the Editorial Board of The American Journal of Political Science and to the Executive Council of The Midwest Political Science Association.” Available at http://ajpsblogging.files.wordpress.com/2013/08/ajps-report-2011.pdf (accessed December 19, 2013).
Wilson, Rick K. 2012. “Report of the Editor to the Editorial Board of The American Journal of Political Science and to the Executive Council of The Midwest Political Science Association.” Available at http://ajpsblogging.files.wordpress.com/2013/08/ajps-report-2012.pdf (accessed December 19, 2013).
Wilson, Rick K. 2013. “Report of the Editor to the Editorial Board of The American Journal of Political Science and to the Executive Council of The Midwest Political Science Association.” Available at http://ajpsblogging.files.wordpress.com/2013/08/ajps-report-2013.pdf (accessed December 19, 2013).

Supplementary material: Djupe supplementary material (File, 15.4 KB).