
Part III - Your Research/Academic Career

Published online by Cambridge University Press:  21 July 2022

Mitchell J. Prinstein
Affiliation: University of North Carolina, Chapel Hill
The Portable Mentor: Expert Guide to a Successful Career in Psychology, pp. 195-326
Publisher: Cambridge University Press
Print publication year: 2022
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial licence (CC BY-NC 4.0): https://creativecommons.org/cclicenses/

11 An Open Science Workflow for More Credible, Rigorous Research

Katherine S. Corker

Recent years have heralded a relatively tumultuous time in the history of psychological science. The past decade saw the publication of a landmark paper that attempted to replicate 100 studies and estimated that just 39 percent of studies published in top psychology journals were replicable (Open Science Collaboration, 2015). There was also a spate of studies failing to replicate high-profile effects that had long been taken as fact (e.g., Hagger et al., 2016; Harris et al., 2013; Wagenmakers et al., 2016). Taken together, these findings suddenly made the foundations of much psychological research seem very shaky.

As with similar evidence in other scientific fields (e.g., biomedicine, criminology), these findings have led to a collective soul-searching dubbed the “replication crisis” or the “credibility revolution” (Nelson et al., 2018; Vazire, 2018). Clearly, something about the way scientists had gone about their work in the past wasn’t effective at uncovering replicable findings, and changes were badly needed. An impressive collection of meta-scientific studies (i.e., studies about scientists and scientific practices) has revealed major shortcomings in standard research and statistical methods (e.g., Button et al., 2013; John et al., 2012; Nuijten et al., 2016; Simmons et al., 2011). These studies point to a clear way to improve not only replicability but also the accuracy of scientific conclusions: open science.

Open science refers to a radically transparent approach to the research process. “Open” refers to sharing – making accessible – parts of the research process that have traditionally been known only to an individual researcher or research team. In a standard research article, authors summarize their research methods and their findings, leaving out many details along the way. Among other things, open science includes sharing research materials (protocols) in full, making data and analysis code publicly available, and pre-registering (i.e., making plans public) study designs, hypotheses, and analysis plans.

Psychology has previously gone through periods of unrest similar to the 2010s, with methodologists and statisticians making persuasive pleas for more transparency and rigor in research (e.g., Bakan, 1966; Cohen, 1994; Kerr, 1998; Meehl, 1978). Yet, it is only now with improvements in technology and research infrastructure, together with concerted efforts in journals and scientific societies by reformers, that changes have begun to stick (Spellman, 2015).

Training in open science practices is now a required part of becoming a research psychologist. The goal of this chapter is to briefly review the shortcomings in scientific practice that open science practices address and then to give a more detailed account of open science itself. We’ll consider what it means to work openly and offer pragmatic advice for getting started.

1. Why Open Science?

When new researchers are introduced to the idea of open science, the need for such practices can seem self-evident. Doesn’t being a scientist logically imply an obligation to transparently show one’s work and subject it to rigorous scrutiny? Yet, abundant evidence reveals that researchers have not historically lived up to this ideal and that the failure to do transparent, rigorous work has hindered scientific progress.

1.1 Old Habits Die Hard

Several factors in the past combined to create conditions that encouraged researchers to avoid open science practices. First, incentives in academic contexts have not historically rewarded such behaviors and, in some cases, may have actually punished them (Smaldino & McElreath, 2016). To get ahead in an academic career, publications are the coin of the realm, and jobs, promotions, and accolades can sometimes be awarded based on the number of publications rather than publication quality.

Second, human biases conspire to fool us into thinking we have discovered something when we actually have not (Bishop, 2020). For instance, confirmation bias allows us to selectively interpret results in ways that support our pre-existing beliefs or theories, which may be flawed. Self-serving biases might cause defensive reactions when critics point out errors in our methods or conclusions. Adopting open science practices can expose researchers to cognitive discomfort (e.g., pre-existing beliefs are challenged; higher levels of transparency mean that critics are given ammunition), which we might naturally seek to avoid.

Finally, psychology uses an apprenticeship model of researcher training, which means that the practices of new researchers might only be as good as the practices of the more senior academics training them. When questionable research practices are taught as normative by research mentors, higher-quality open science practices might be dismissed as methodological pedantry.

Given the abundant evidence of flaws in psychology’s collective body of knowledge, we now know how important it is to overcome the hurdles described here and transition to a higher standard of practice. Incentives are changing, and open science practices are becoming the norm at many journals (Nosek et al., 2015). A new generation of researchers is being trained to employ more rigorous practices. And although the cognitive biases just discussed might be some of the toughest problems to overcome, greater levels of transparency in the publishing process help fortify the ability of the peer review process to serve as a check on researcher biases.

1.2 Benefits of Open Science Practices

A number of benefits of open science practices are worth emphasizing. First, increases in transparency make it possible for errors to be detected and for science to self-correct. The self-correcting nature of science is often heralded as a key feature that distinguishes scientific approaches from other ways of knowing. Yet, self-correction is difficult, if not impossible, when details of research are routinely withheld (Vazire & Holcombe, 2020).

Second, openly sharing research materials (protocols), analysis code, and data provides new opportunities to build on research and adds value beyond what a single study alone would contribute. For example, future researchers can more easily replicate a study’s methods if they have access to a full protocol and materials; secondary data analysts and meta-analysts can perform novel analyses on raw data if they are shared.

Third, collaborative work becomes easier when teams adopt the careful documentation habits that open science practices instill. Even massive collaborations across time and location become possible when research materials and data are shared following similar standards (Moshontz et al., 2018).

Finally, the benefits of open science practices accrue not only to the field at large, but also to individual researchers. Working openly provides a tangible record of your contributions as a researcher, which may be useful when it comes to applying for funding, awards, or jobs.

Markowetz (2015) describes five “selfish” reasons to work reproducibly, namely: (a) to avoid “disaster” (i.e., major errors), (b) because it’s easier, (c) to smooth the peer review process, (d) to allow others to build on your work, and (e) to build your reputation. Likewise, McKiernan et al. (2016) review the ample evidence that articles featuring open science practices tend to be cited more, receive more media attention, attract more funding and job offers, and are associated with larger networks of collaborators. Allen and Mehler (2019) review benefits (along with challenges) specifically for early career researchers.

All of this is not to say that there are no costs or downsides to some of the practices discussed here. For one thing, learning and implementing new techniques takes time, although experience shows that you’ll become faster and more efficient with practice. Additionally, unsupportive research mentors or other senior collaborators can make it challenging to embrace open science practices. The power dynamics in such relationships may mean that there is little flexibility in the practices that early career researchers can employ. Trying to propose new techniques can be stressful and might strain advisor-advisee relationships, but see Kathawalla et al. (2021) for rebuttals to these issues and other common worries.

In spite of these persistent challenges and the old pressures working against the adoption of open science practices, I hope to convince you that the benefits of working openly are numerous – both to the field and to individual researchers. As a testament to changing norms and incentives, open science practices are spreading and taking hold in psychology (Christensen, Freese, et al., 2019; Tenney et al., 2021). Let us consider in more detail what we actually mean by open science practices.

2. Planning Your Research

Many open science practices boil down to forming or changing your work habits so that more parts of your work are available to be observed by others. But like other good habits (eating well, exercising), open science practices may take some initial effort to put into place. You may also find that what works well for others doesn’t work well for you, and it may take some trial and error to arrive at a workflow that is both effective and sustainable. However, the benefits that you’ll reap from establishing these habits – both immediate and delayed – are well worth the effort. It may not seem like it, but there is no better time in your career to begin than now.

Likewise, you may find that many open science practices are most easily implemented early in the research process, during the planning stages. But fear not: if a project is already underway, we’ll consider ways to add transparency to the research process at later stages as well. Here, we’ll discuss using the Open Science Framework (https://osf.io), along with pre-registration and registered reports, as you plan your research.

2.1 Managing the Open Science Workflow: The Open Science Framework

The Open Science Framework (OSF; https://osf.io) is a powerful research management tool. A tool like OSF lets you manage all stages of the research process in one location, which helps you stay organized. OSF is also not tied to any specific academic institution, so you won’t have to worry about transferring your work when you inevitably change jobs (perhaps several times). Other tools exist that can do many of the things OSF can (some researchers like to use GitHub, figshare, or Zenodo, for instance), but OSF was specifically created for managing scientific research and has a number of features that make it uniquely suited for the task. OSF’s core functions include (but are not limited to) long-term archival of research materials, analysis code, and data; a flexible but robust pre-registration tool; and support for collaborative workflow management. Later in the chapter, we’ll discuss the ins and outs of each of these practices, but here I want to review a few of the ways that OSF is specialized for these functions.

The main unit of work on OSF is the “project.” Each project has a stable URL and the potential to create an associated digital object identifier (DOI). This means that researchers can make reference to OSF project pages in their research articles without worry that links will cease to function or shared content will become unavailable. A sizable preservation fund promises that content shared on OSF will remain available for at least 50 years, even if the service should cease to operate. This stability makes OSF well-suited to host part of the scientific record.

A key feature of projects is that they can be made public (accessible to all) or private (accessible only to contributors). This feature allows you to share your work publicly when you are ready, whether that is immediately or only after a project is complete. Another feature is that projects can be shared using “view-only” links. These links can optionally hide contributor names, so that materials shared in a project can be made accessible to peer reviewers at journals that use masked review.

Projects can have any number of contributors, making it possible to easily work collaboratively even with a large team. An activity tracker gives a detailed and complete account of changes to the project (e.g., adding or removing a file, editing the project wiki page), so you always know who did what, and when, within a project. Another benefit is the ability to easily connect OSF to other tools (e.g., Google Drive, GitHub) to further enhance OSF’s capabilities.

Within projects, it is possible to create nested “components.” Components have their own URLs, DOIs, privacy settings, and contributor lists. It is possible, for instance, to create a component within a project and to restrict access to that component alone while making the rest of the project publicly accessible. If particular parts of a project are sensitive or confidential, components can be a useful way to maintain the privacy of that information. Similarly, it may be necessary for some members of a research group to have access to parts of a research project while others do not. Components give researchers this fine-grained level of control.

Finally, OSF’s pre-registration function allows projects and components to be “frozen” (i.e., saved as time-stamped copies that cannot be edited). Researchers can opt to pre-register their projects using one of many templates, or they can simply upload the narrative text of their research plans. In this way, researchers and editors can be confident about which elements of a study were pre-specified and which were informed by the research process or outcomes.

The review of OSF’s features here is necessarily brief. Soderberg (2018) provides a step-by-step guide for getting started with OSF. Tutorials are also available on the Center for Open Science’s YouTube channel. I recommend selecting a project – perhaps one for which you are the lead contributor – to try out OSF and get familiar with its features in greater detail. Later, you may want to consider using a project template, like the one that I use in my lab (Corker, 2016), to standardize the appearance and organization of your OSF projects.

2.2 Pre-Registration and Registered Reports

Learning how to pre-register research involves much more than just learning how to use a particular tool (like OSF) to complete the registration process. Like other research methods, training and practice are needed to become skilled at this key open science technique (Tackett et al., 2020). Pre-registration refers to publicly reporting study designs, hypotheses, and/or analysis plans prior to the onset of a research project. Additionally, the pre-registered plan should be shared in an accessible repository, and it should be “read-only” (i.e., not editable after posting). As we’ll see, there are several reasons a researcher might choose to pre-register, along with a variety of benefits of doing so, but the most basic function of the practice is that pre-registration clearly delineates the parts of a research project that were specified before the onset of a project from those parts that were decided on along the way or based on observed data.

Depending on their goals, researchers might pre-register for different reasons (da Silva Frost & Ledgerwood, 2020; Ledgerwood, 2018; Navarro, 2019). First, researchers may want to constrain particular data analytic choices prior to encountering the data. Doing so makes it clear to the researchers, and to readers, that the presented analysis is not merely the one most favorable to the authors’ predictions, nor the one with the lowest p-value. Second, researchers might desire to specify theoretical predictions prior to encountering a result. In so doing, they set up conditions that enable a strong test of the theory, including the possibility for falsification of alternative hypotheses (Platt, 1964). Third, researchers may seek to increase the transparency of their research process, documenting particular plans and, crucially, when those plans were made. In addition to the scientific benefits of transparency, pre-registration can also facilitate more detailed planning than usual, potentially increasing research quality as potential pitfalls are caught early enough to be remedied.

Some of these reasons are more applicable to certain types of research than others, but nearly all research can benefit from some form of pre-registration. For instance, some research is descriptive and does not test hypotheses stemming from a theory. Other research might feature few or no statistical analyses. The theory testing or analytic constraint functions of pre-registration might not be applicable in these instances. However, the benefits of increased transparency and enhanced planning stand to benefit many kinds of research (but see Devezer et al., 2021, for a critical take on the value of pre-registration).

A related but distinct practice is Registered Reports (Chambers, 2013). In a registered report, authors submit a study proposal – usually as a manuscript consisting of a complete introduction, proposed method, and proposed analysis section – to a journal that offers the format. The manuscript (known at that point as “stage 1”) is then peer-reviewed, after which it can be rejected, accepted, or receive a revise and resubmit decision. Crucially, once the stage 1 manuscript is accepted (most likely after revision following peer review), the journal agrees to publish the final paper regardless of the statistical significance of the results, provided the agreed upon plan has been followed – an arrangement known as “in-principle acceptance.” Once results are in, the paper (at this point known as a “stage 2” manuscript) goes out again for peer review to verify that the study was executed as agreed.

When stage 1 proposals are published (either as stand-alone manuscripts or as supplements to the final stage 2 manuscripts), registered reports allow readers to confirm which parts of a study have been planned ahead of time, just like ordinary pre-registrations. Likewise, registered reports limit strategic analytic flexibility, allow strong tests of hypotheses, and increase the transparency of research. Crucially, however, registered reports also address publication bias, because papers are not accepted or rejected on the basis of the outcome of the research. Furthermore, the two-stage peer-review process has an even greater potential to improve study quality, because researchers receive the benefit of peer critique during the design phase of a study when there is still time to correct flaws. Finally, because the publication process is overseen by an editor, undisclosed deviations from the pre-registered plan may be less likely to occur than they are with unreviewed pre-registration. Pragmatically, registered reports might be especially worthwhile in contentious areas of study where it is useful to jointly agree on a critical test ahead of time with peer critics. Authors can also enjoy the promise of acceptance of the final product prior to investing resources in data collection.

Table 11.1 lists guidance and templates that have been developed across different subfields and research methods to enable nearly any study to be pre-registered. A final conceptual distinction is worth brief mention. Pre-registrations are documentation of researchers’ plans for their studies (in systematic reviews of health research, these documents are known as protocols). When catalogued and searchable, pre-registrations form a registry. In the United States, the most common study registry is clinicaltrials.gov, because the National Institutes of Health requires studies that it funds to be registered there. PROSPERO (Page et al., 2018) is the main registry for health-related systematic reviews. Entries in clinicaltrials.gov and PROSPERO must follow a particular format, and adhering to that format may or may not fulfill researchers’ pre-registration goals (for analytic constraint, for hypothesis testing, or for increasing transparency). For instance, when registering a study in clinicaltrials.gov, researchers must declare their primary outcomes (i.e., dependent variables) and distinguish them from secondary outcomes, but they are not required to submit a detailed analysis plan. A major benefit of study registries is to track the existence of studies independent of final publications. Registries also allow the detection of questionable research practices like outcome switching (e.g., Goldacre et al., 2019). However, entries in clinicaltrials.gov and PROSPERO fall short in many ways when it comes to achieving the various goals of pre-registration discussed above. It is important to distinguish brief registry entries from more detailed pre-registrations and protocols.

Table 11.1 Guides and templates for pre-registration

Method/subfield | Source
Clinical science | Benning et al. (2019)
Cognitive modeling application | Crüwell & Evans (2020)
Developmental cognitive neuroscience | Flourney et al. (2020)
EEG/ERP | Paul et al. (2021)
Experience sampling | Kirtley et al. (2021)
Experimental social psychology | van ’t Veer & Giner-Sorolla (2016)
Exploratory research | Dirnagl (2020)
fMRI | Flannery (2020)
Infant research | Havron et al. (2020)
Intervention research | Moreau & Wiebels (2021)
Linguistics | Roettger (2021); Mertzen et al. (2021)
Psychopathology | Krypotos et al. (2019)
Qualitative research | Haven & Van Grootel (2019); Haven et al. (2020)
Quantitative research | Bosnjak et al. (2021)
Replication research | Brandt et al. (2014)
Secondary data analysis | Weston et al. (2019); Mertens & Krypotos (2019); Van den Akker et al. (2021)
Single-case design | Johnson & Cook (2019)
Systematic review (general) | Van den Akker et al. (2020)
Systematic review and meta-analysis protocols (PRISMA-P) | Moher et al. (2015); Shamseer et al. (2015)
Systematic review (non-interventional) | Topor et al. (2021)

3. Doing the Research

Open science considerations are as relevant when you are actually conducting your research as they are when you are planning it. One of the things you have surely already learned in your graduate training is that research projects often take a long time to complete. It may be several months, or perhaps even longer, after you have planned a study and collected the data before you are actually finalizing a manuscript to submit for publication. Even once an initial draft is completed, you will again have a lengthy wait while the paper is reviewed, after which time you will invariably have to return to the project for revisions. To make matters worse, as your career unfolds, you will begin to juggle multiple such projects simultaneously. Put briefly: you need a robust system of documentation to keep track of these many projects.

In spite of the importance of this topic, most psychology graduate programs offer little formal training in these practices. Here, I will provide an overview of a few key topics, but you would be well served to dig more deeply on your own. In particular, Briney (2015) provides a book-length treatment of data management practices. (Here “data” is used in the broad sense to mean information, which includes but extends beyond participant responses.) Henry (2021a, 2021b) provides an overview of many relevant issues as well. Another excellent place to look for help is your university library. Librarians are experts in data management, and libraries often host workshops and give consultations to help researchers improve their practices.

Several practices are part of the array of options available to openly document your research process. Here, I’ll introduce open lab notebooks, open protocols/materials, and open data/analysis code. Klein et al. (2018) provide a detailed, pragmatic look at these topics, highlighting considerations around what to share, how to share, and when to share.

3.1 Open Lab Notebooks

One way to track your research as it unfolds is to keep a detailed lab notebook. Recently, some researchers have begun to keep open, digital lab notebooks (Campbell, 2018). Put briefly, open lab notebooks allow outsiders to access the research process in its entirety in real time (Bradley et al., 2011). Open lab notebooks might include entries for data collected, experiments run, analyses performed, and so on. They can also include accounts of decisions made along the way – for instance, to change an analysis strategy or to modify the participant recruitment protocol. Open lab notebooks are a natural complement to pre-registration insofar as a pre-registration spells out a plan for a project, and the lab notebook documents the execution (or alteration) of that plan. In fact, for some types of research, where the a priori plan is relatively sparse, an open lab notebook can be an especially effective way to transparently document exploration as it unfolds.

On a spectrum from completely open research to completely opaque research, the practice of keeping an open lab notebook marks the far (open) end of the scale. For some projects (or some researchers) the costs of keeping a detailed open lab notebook in terms of time and effort might greatly exceed the scientific benefits for transparency and record keeping. Other practices may achieve similar goals more efficiently, but for some projects, the practice could prove invaluable. To decide whether an open lab notebook is right for you, consider the examples given in Campbell (2018). You can also see an open notebook in action here: https://osf.io/3n964/ (Koessler et al., 2019).

3.2 Open Protocols and Open Materials

A paper’s Method section is designed to describe a study protocol – that is, its design, participants, procedure, and materials – in enough detail that an independent researcher could replicate the study. In actuality, many key details of study protocols are omitted from Method sections (Errington, 2019). To remedy this information gap, researchers should share full study protocols, along with the research materials themselves, as supplemental files. Protocols can include things like complete scripts for experimental research assistants, video demonstrations of techniques (e.g., a participant interaction or a neurochemical assay), and full copies of study questionnaires. The goal is for another person to be able to execute a study fully without any assistance from the original author.

Research materials that have been created specifically for a particular study – for instance, the actual questions asked of participants or program files for an experimental task – are especially important to share. If existing materials are used, the source where those materials can be accessed should be cited in full. If there are limitations on the availability of materials, which might be the case if materials are proprietary or have restricted access for ethical reasons, those limitations should be disclosed in the manuscript.

3.3 Reproducible Analyses, Open Code, and Open Data

One of the basic features of scientific research products is that they should be independently reproducible. A finding that can only be recreated by one person is a magic trick, not a scientific finding. Here, reproducible means that results can be recreated using the same data originally used to make a claim. By contrast, replicability implies the repetition of a study’s results using different data (e.g., a new sample). Note also that a finding can be reproducible, or even replicable, without being a valid or accurate representation of reality (Vazire et al., 2020). Reproducibility can be thought of as a minimally necessary precursor to later validity claims. In psychology, analyses of quantitative data very often form the backbone of our scientific claims. Yet, the reproducibility of data analytic procedures may never be checked, or if they are checked, findings may not be reproducible (Obels et al., 2020; Stodden et al., 2018). Even relatively simple errors in reporting threaten the accuracy of the research literature (Nuijten et al., 2016).

Luckily, these problems are fixable, if we are willing to put in the effort. Specifically, researchers should share the code underlying their analyses and, when legally and ethically permissible, they should share their data. But beyond just sharing the “finished product,” it may be helpful to think about preparing your data and code to share while the project is actually under way (Klein et al., 2018).

Whenever possible, analyses should be conducted using analysis code – also known as scripting or syntax – rather than by using point-and-click menus in statistical software or doing hand calculations in spreadsheet programs. To further enhance the reproducibility of reported results, you can write your results section using a language called R Markdown. Succinctly, R Markdown combines descriptive text with results (e.g., statistics, counts) drawn directly from analyses. When results are prepared in this way, there is no need to worry about typos or other transcription errors making their way into your paper, because numbers from results are pulled directly from statistical output. Additionally, if there is a change to the data – say, if analyses need to be re-run on a subset of cases – the result text will automatically update with little effort.
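As a minimal, hypothetical sketch (the data, variable names, and numbers below are invented for illustration), an R Markdown document mixes a code chunk that runs the analysis with inline `r` calls that pull the resulting values into the prose:

````markdown
```{r analysis, include = FALSE}
# Hypothetical data: response times (ms) in two conditions
dat <- data.frame(
  condition = rep(c("control", "treatment"), each = 50),
  rt = c(rnorm(50, mean = 500, sd = 50), rnorm(50, mean = 480, sd = 50))
)
# Two-sample comparison of the conditions
t_res <- t.test(rt ~ condition, data = dat)
```

Mean response time was `r round(mean(dat$rt), 1)` ms overall. The two
conditions differed by `r round(abs(diff(t_res$estimate)), 1)` ms,
*t*(`r round(t_res$parameter, 1)`) = `r round(t_res$statistic, 2)`,
*p* = `r round(t_res$p.value, 3)`.
````

Because every reported number is computed at render time, re-running the document on updated data refreshes the text automatically.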

Peikert and Brandmaier (2019) describe a possible workflow for achieving reproducible results using R Markdown along with a handful of other tools. Rouder (2016) details a process for sharing data as it is generated – so-called “born open” data. This method also preserves the integrity of original data. When combined with Peikert and Brandmaier’s technique, the potential for errors to affect results or reporting is greatly diminished.

Regardless of the particular scripting language that you use to analyze your data, the code, along with the data itself, should be well documented to enable use by others, including reviewers and other researchers. You will want to produce a codebook, also known as a data dictionary, to accompany your data and code. Buchanan et al. (2021) describe the ins and outs of data dictionaries. Arslan (2019) writes about an automated process for codebook generation using R statistical software.
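To give a sense of what a minimal data dictionary contains, the hypothetical R sketch below builds one by hand (all variable names are invented); packages such as the one Arslan describes can generate much richer codebooks automatically.

```r
# A minimal, hand-built data dictionary (hypothetical variable names).
# Each row documents one variable in the dataset: its name, a
# human-readable label, the values it can take, and its units.
dictionary <- data.frame(
  variable = c("sub_id", "condition", "rt_ms"),
  label    = c("Participant identifier",
               "Experimental condition",
               "Response time"),
  values   = c("S001 to S100", "control or treatment", "positive number"),
  units    = c(NA, NA, "milliseconds")
)

# Store the dictionary next to the data so it travels with the dataset
write.csv(dictionary, "data_dictionary.csv", row.names = FALSE)
```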

3.4 Version Control

When it comes to tracking research products in progress, a crucial concept is known as version control. A version control system permits contributors to a paper or other product (such as analysis code) to automatically track who made changes to the text and when they made them. Rather than saving many copies of a file in different locations and under different names, there is only one copy of a version-controlled file, but because changes are tracked, it is possible to roll back a file to an earlier version (for instance, if an error is detected). On large collaborative projects, it is vital to be able to work together simultaneously and to be able to return to an earlier version of the work if needed.

Working with version-controlled files decreases the potential for mistakes in research to go undetected. Rouder et al. (2019) describe practices, including the use of version control, that help to minimize mistakes and improve research quality. Vuorre and Curley (2018) provide specific guidance for using Git, one of the most popular version control systems. An additional benefit of learning to use these systems is their broad applicability in non-academic research settings (e.g., at technology and health companies). Indeed, developing skills in domain-general areas like statistics, research design, and programming will broaden the array of opportunities available to you when your training is complete.
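For readers who have never used Git, a typical command-line session looks roughly like the following sketch (the file name is a placeholder; graphical clients and the tutorials cited above wrap these same steps):

```sh
# Put an existing project folder under version control
git init

# Record the current state of an analysis script, with a message
git add analysis.R
git commit -m "Add initial analysis script"

# Review the history: who changed what, and when
git log --oneline

# Roll a file back to the previous commit if an error slipped in
git checkout HEAD~1 -- analysis.R
```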

3.5 Working Openly Facilitates Teamwork and Collaboration

Keeping an open lab notebook, sharing a complete research protocol, or producing a reproducible analysis script that runs on open data might seem laborious compared to closed research practices, but there are advantages of these practices beyond the scientific benefits of working transparently. Detailed, clear documentation is needed for any collaborative research, and the need might be especially great in large teams. Open science practices can even facilitate massive collaborations, like those managed by the Psychological Science Accelerator (PSA; Moshontz et al., 2018). The PSA is a global network of over 500 laboratories that coordinates large investigations of democratically selected study proposals. It enables even teams with limited resources to study important questions at a large enough scale to yield rich data and precise answers. Open science practices are baked into all parts of the research process, and indeed, such efforts would not be feasible or sustainable without these standard operating procedures.

Participating in a large collaborative project, such as one run by the PSA, is an excellent way to develop your open science skillset. It can be exciting and rewarding to work in such a large team, and doing so also offers the opportunity to learn from the many other collaborators on the project.

4. Writing It Up: Open Science and Your Manuscript

The most elegant study with the most interesting findings is scientifically useless until the findings are communicated to the broader research community. Indeed, scientific communication may be the most important part of the research process. Yet skillfully communicating results isn’t about mechanically relaying the outcomes of hypothesis tests. Rather, it’s about writing that leaves the reader with a clear conclusion about the contribution of a project. In addition to crafting a narratively compelling manuscript, researchers employing open science practices will also want to transparently and honestly describe the research process. Adept readers may sense a conflict between these two goals – crafting a compelling narrative vs. being transparent and honest – but in reality, both can be achieved.

4.1 Writing Well and Transparently

Gernsbacher (2018) provides detailed guidance on preparing a high-quality manuscript (with a clear narrative) while adhering to open science practices. She writes that the best articles are transparent, reproducible, clear, and memorable. To achieve clarity and memorability, authors must attend to good writing practices like writing short sentences and paragraphs and seeking feedback. These techniques are not at odds with transparency and reproducibility, which can be achieved through honest, detailed, and clear documentation of the research process. Even higher levels of detail can be achieved by including supplemental files along with the main manuscript.

One issue, of course, is how to decide which information belongs in the main paper versus the supplemental materials. A guiding principle is to organize your paper to help the reader understand the paper’s contribution while transparently describing what you’ve done and learned. Gernsbacher (2018) advises having an organized single file as a supplement to ease the burden on reviewers and readers. A set of well-labeled and organized folders in your OSF project (e.g., Materials, Data, Analysis Code, Manuscript Files) can also work well. Consider including a “readme” file or other descriptive text to help readers understand your file structure.
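As one illustration (a suggested layout, not a required structure), a project's shared files might be organized like this:

```
my-project/
  README.md        - what the project is and how the files fit together
  materials/       - questionnaires, stimuli, protocol documents
  data/            - de-identified data plus the data dictionary
  analysis-code/   - scripts, numbered in the order they are run
  manuscript/      - R Markdown source and rendered drafts
```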

If a project is pre-registered, it is important that all of the plans (and hypotheses, if applicable) in the study are addressed in the main manuscript. Even results that are not statistically significant deserve discussion in the paper. If planned methods have changed, this is normal and absolutely fine. Simply disclose the change (along with accompanying rationale) in the paper, or better yet, file an addendum to your pre-registration when the change is made before proceeding. Likewise, when analysis plans change, disclose the change in the final paper. If the originally planned analysis strategy and the preferred strategy are both valid techniques, and others might disagree about which strategy is best, present results using both strategies. The details of the comparative analyses can be placed in a supplement, but discuss the analyses in the main text of the paper.

A couple of additional tools to assist with writing your open science manuscript are worth mention. First, Aczel et al. (2020) provide a consensus-based transparency checklist that authors can complete to confirm that they have made all relevant transparency-based disclosures in their papers. The checklist can also be shared (e.g., on OSF) alongside a final manuscript to help guide readers through the disclosures. Second, R Markdown can be used to draft the entire text of your paper, not just the results section. Doing so allows you to more easily render the final paper in a particular typesetting style. More importantly, the full paper will then be reproducible. Rather than work from scratch, you may want to use the papaja package (Aust & Barth, 2020), which provides an R Markdown template. Many researchers also like to use papaja in concert with Zotero (www.zotero.org/), an open-source reference manager.
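For orientation, the header of a papaja manuscript looks roughly like the sketch below (the title, names, and file names are placeholders; see the papaja documentation for the full set of fields):

````markdown
---
title          : "A Hypothetical Manuscript Title"
author         :
  - name       : "First Author"
    affiliation: "1"
affiliation    :
  - id         : "1"
    institution: "Example University"
bibliography   : "references.bib"
output         : papaja::apa6_pdf
---

```{r setup, include = FALSE}
library(papaja)  # helper functions for APA-style tables and statistics
```
````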

4.2 Selecting a Journal for Your Research

Beyond questions of a journal’s topical reach and its reputation in the field, different journals have different policies when it comes to open science practices. When selecting a journal, you will want to review that journal’s submission guidelines to ensure that you understand and comply with its requirements. Another place to look for guidance on a journal’s stance on open science practices is editorial statements. These statements usually appear within the journal itself, but if the journal is owned by a society, they may also appear in society publications (e.g., American Psychological Association Monitor, Association for Psychological Science Observer).

Many journals are signatories of the TOP (Transparency and Openness Promotion) Guidelines, which specify three different levels of adoption for eight different transparency standards (Nosek et al., 2015; see also https://topfactor.org). Journals with policies at level 1 require authors to disclose details about their studies in their manuscripts – for instance, whether the data associated with studies are available. At level 2, sharing of study components (e.g., materials, data, or analysis code) is required for publication, with exceptions granted for valid legal and ethical restrictions on sharing. At level 3, the journal or its designee verifies the shared components – for instance, a journal might check whether a study’s results can be reproduced from shared analysis code. Importantly, journals can adopt different levels of transparency for the different standards. For instance, a journal might adopt level 1 (disclose) for pre-registration of analysis plans, but level 3 (verify) for study materials. Again, journal submission guidelines, along with editorial statements, provide guidance as to the levels adopted for each standard.

Some journals also offer badges for adopting transparent practices. At participating journals, authors declare whether they have pre-registered a study or shared materials and/or data, and the journal then marks the resulting paper with up to three badges (pre-registration, open data, open materials) indicating the availability of the shared content.

A final consideration is the pre-printing policies of a journal. Almost certainly, you will want the freedom to share your work on a preprint repository like PsyArXiv (https://psyarxiv.com). Preprint repositories allow authors to share their research ahead of publication, either before submitting the work for peer review at a journal or after the peer-review process is complete. Some repositories deem the latter class of manuscripts “post-prints” to distinguish them from papers that have not yet been published in a journal. Sharing early copies of your work will enable you to get valuable feedback prior to journal submission. Even if you are not ready to share a pre-publication copy of your work, sharing the final post-print increases access to the work – especially for those without access through a library, including researchers in many countries, scholars without university affiliations, and the general public. Manuscripts shared on PsyArXiv are indexed on Google Scholar, increasing their discoverability.

You can check the policies of your target journal at the Sherpa Romeo database (https://v2.sherpa.ac.uk/romeo/). The journals with the most permissive policies allow sharing of the author copy of a paper (i.e., what you send to the journal, not the typeset version) immediately on disciplinary repositories like PsyArXiv. Other journals impose an embargo on sharing of perhaps one or two years. A very small number of journals will not consider manuscripts that have been shared as preprints. It’s best to understand a journal’s policy before choosing to submit there.

Importantly, sharing pre- or post-print copies of your work is free to do, and it greatly increases the reach of your work. Another option (which may even be a requirement depending on your research funder) is to publish your work in a fully open-access journal (called “gold” open access) or in a traditional journal with the option to pay for your article to be made open access (called “hybrid” open access). Gold open-access journals use the fees from articles to cover the costs of publishing, but articles are free to read for everyone without a subscription. Hybrid journals, on the other hand, charge libraries large subscription fees (as they do with traditional journals), and they charge authors who opt to have their articles made open access, effectively doubling the journal’s revenue without incurring additional costs. The fees for hybrid open access are almost never worth it, given that authors can usually make their work accessible for free using preprint repositories.

Fees to publish your work in a gold open-access journal currently vary from around US$1000 on the low end to US$3000 or more on the high end. Typically, a research funder pays these fees, but if not, there may be funds available from your university library or research support office. Some journals offer fee waivers for authors who lack access to grant or university funding for these costs. Part of open science means making the results of research as accessible as possible. Gold open-access journals are one means of achieving this goal, but preprint repositories play a critical role as well.

5. Coda: The Importance of Community

Certainly, there are many tools and techniques to learn when it comes to open science practices. When you are just beginning, you will likely want to take it slow to avoid becoming overwhelmed. Additionally, not every practice described here will be relevant for every project. With time, you will learn to deploy the tools you need to serve a particular project’s goals. Yet, it is also important not to delay beginning to use these practices. Now is the time in your career where you are forming habits that you will carry with you for many years. You want to lay a solid foundation for yourself, and a little effort to learn a new skill or technology now will pay off down the road.

One of the best ways to get started with open science practices is to join a supportive community of other researchers who are also working towards the same goal. Your region or university might have a branch of ReproducibiliTea (https://reproducibilitea.org/), a journal club devoted to discussing and learning about open science practices. If it doesn’t, you could gather a few friends and start one, or you could join one of the region-free online clubs. Twitter is another excellent place to keep up to date on new practices, and it’s also great for developing a sense of community. Another option is to attend the annual meeting of the Society for the Improvement of Psychological Science (SIPS; http://improvingpsych.org). The SIPS meeting features workshops to learn new techniques, alongside active sessions (hackathons and unconferences) where researchers work together to develop new tools designed to improve psychological methods and practices. Interacting with other scholars provides an opportunity to learn from one another, but also provides important social support. Improving your research practices is a career-long endeavor; it is surely more fun not to work alone.

Acknowledgments

Thank you to Julia Bottesini and Sarah Schiavone for their thoughtful feedback. All errors and omissions are my own.

12 Presenting Your Research

Lindsey L. Cohen, Abigail Robbertz, & Sarah Martin

1. Reasons for Presenting Research

There are several pros and cons to evaluate when deciding whether to submit your research to a conference. There is value in sharing your science with others at the conference, such as professors, students, clinicians, teachers, and other professionals who might be able to use your findings to advance their own work. As a personal gain, your audience may provide feedback, which can be invaluable to your professional development. Presenting research at conferences also allows for the opportunity to meet potential future advisors, employers, collaborators, or colleagues. Conferences are ideal settings for networking and, in fact, many conferences have forums organized for this exact purpose (e.g., job openings listed on a bulletin board and networking luncheons). The downsides to submitting your work to a conference include the time commitment of writing and constructing the presentation, the potential for rejection from the reviewers, the anxiety inherent in formal presentations, and the potential time and expenses of traveling to the meeting. Although we do believe that the benefits of presenting at conferences outweigh the costs, you should carefully consider your own list of pros and cons before embarking on this experience.

2. Presentation Venues

There are many different outlets for presenting research findings, ranging from departmental colloquia to international conferences. The decision to submit a proposal to one conference over another should be guided by both practical and professional reasoning. In selecting a convention, you might consider the following questions: Is this the audience to whom I wish to disseminate my findings? Are there other professionals that I would like to meet attending this conference? Are the other presentations of interest to me? Are the philosophies of the association consistent with my perspectives and training needs? Can I afford to travel to this location? Will my institution provide funding for the cost of this conference? Will my presentation be ready in time for the conference? Am I interested in visiting the city that is hosting the conference? Do the dates of the conference interfere with personal or professional obligations? Will this conference provide the opportunity to network with colleagues and friends? Is continuing education credit offered? Since the COVID-19 pandemic, many meetings are virtual, which provides a lower-cost option, but networking is more challenging and linking a vacation to the conference is no longer possible. Fortunately, there is a range of options, and you should be able to find a venue that satisfies most of your professional presenting needs.

3. Types of Presentations

After selecting a conference, you must decide on the type of presentation. In general, presentation categories are similar across venues and include poster and oral presentations (e.g., papers, symposia, panel discussions) and workshops. Poster presentations are optimal for disseminating preliminary or pilot findings, whereas well-established findings, cutting-edge research, and conceptual/theoretical issues often are reserved for oral presentations and workshops. A call for abstracts or proposals is often distributed by the institution hosting the conference and announces particular topics of interest for presentations. If you are unsure about whether your research is best suited for a poster, oral presentation, or workshop, refer to the call for abstracts or proposals and consult with more experienced colleagues. Keynote and invited addresses are other types of conference presentations typically delivered by esteemed professionals or experts in the field. It is important to note that not all conferences use the same terminology, especially when comparing conferences across countries. For example, a “workshop” at one conference might be a full-day interactive training session, while at another conference it might indicate a briefer oral presentation. The following sections are organized in accord with common formats found at many conferences.

The most common types of conference presentations – poster presentations, symposia, panel discussions, and workshops – deserve further discussion. Typically, these scientific presentations follow a consistent format, which is similar to the layout of a research manuscript. For example, first you might introduce the topic, highlight related prior work, outline the purpose and hypotheses of the study, review the methodology, and, lastly, present and discuss salient results and implications (see Drotar, 2000).

3.1 Poster Presentations

Poster presentations are the most common medium through which researchers disseminate findings. In this format, researchers summarize their primary aims, results, and conclusions in an easily digestible manner on a poster board. Poster sessions vary in duration, often ranging from 1 to 2 hours. Authors typically are present with their posters for the duration of the session to discuss their work with interested colleagues. Poster presentations are relatively less formal and more personal than other presentation formats, with the discussion of projects often assuming a conversational quality. That said, it is important to be prepared to answer challenging questions about the work. Typically, many posters within a particular theme (e.g., health psychology) are displayed in a large room so that audiences might walk around the room and talk one-to-one with the authors. Thus, poster sessions are particularly well-suited for facilitating networking and meeting with researchers working in similar areas.

Pragmatically, conference reviewers accept more posters for presentation than symposia, panel discussions, and workshops, and thus, the acceptance criteria are typically more lenient. Relatedly, researchers might choose posters to present findings from small projects or preliminary or pilot studies. Symposia, panel discussions, and workshops allow for the formal presentation of more ground-breaking findings or of multiple studies. Poster presentations are an opportune time for students to become familiar with disseminating findings and to mingle with other researchers in the field.

3.2 Research Symposia

Symposia involve the aggregation of several individuals who present on a common topic. Depending on time constraints, 4–6 papers typically are featured, each lasting roughly 20 minutes, and often representing different viewpoints or facets of a broader topic. For example, a symposium on the etiology of anxiety disorders might consist of four separate papers representing the role of familial influences, biological risk factors, peer relationships, and emotional conditioning in the development of maladaptive anxiety. As a presenter, you might discuss one project or the findings from a few studies. Like a master of ceremonies, the symposium Chair typically organizes the entire symposium by selecting presenters, guiding the topics and style of presentation, and introducing the topic and presenters at the beginning of the symposium. In addition to these duties, the Chair often will present a body of work or a few studies at the beginning of the symposium. In addition to the Chair and presenters, a Discussant can be part of a symposium. The Discussant concludes the symposium by summarizing key findings from each paper, integrating the studies, and drawing broader conclusions and directions for future research. Although a Discussant is privy to the presenters’ papers prior to the symposium in order to prepare the summary comments, he or she will often take notes during the presenters’ talks to augment any prepared commentary. Presenters are often researchers of varying levels of experience, while Chairs and Discussants are usually senior investigators. The formal presentation is often followed by a period for audience inquiry and discussion.

3.3 Panel Discussions

Panel discussions are similar to research symposia in that several professionals come together to discuss a common topic. Panel discussions, however, generally tend to be less formal and structured and more interactive and animated than symposia. For example, Discussants can address each other and interject comments throughout the discussion. Similar to symposia, these presentations involve the discussion of one or more important topics in the field by informed Discussants. As with symposia presentations, the Chair typically organizes these semi-formal discussions by contacting potential speakers and communicating the discussion topic and their respective roles.

3.4 Workshops

Conference workshops typically are longer (e.g., lasting at least three hours) and provide more in-depth, specialized training than symposia and panel discussions. It is not uncommon for workshop presenters to adopt a format similar to a structured seminar, in which mini-curricula are followed. Given the length and specialized training involved, most workshop presenters enhance their presentations by incorporating interactive (e.g., role-plays) and multimedia (e.g., video clips) components. Workshops often are organized such that the information is geared toward beginner, intermediate, or advanced professionals. Often conferences require that participation in workshops be reserved in advance, and there might be additional fees associated with attendance. The cost should be balanced against the opportunity to obtain unique training in a specialized area. Workshops are most often presented by seasoned professionals; however, more junior presenters with specialized skills or knowledge might conduct a workshop.

4. The Application Process

After selecting a venue and deciding on a presentation type, the next step is to submit an application to the conference you wish to attend. The application process typically involves submitting a brief abstract (e.g., 200–300 words) describing the primary aims, methods, results, and conclusions of your study. For symposia and other oral presentations, the selection committee might request an outline of your talk, curricula vitae from all presenters, and a time schedule or presentation agenda. Some conferences might also request information regarding the educational objectives and goals of your presentation. One essential rule is to adhere closely to the conference's submission directions. For example, if there is a word limit for a poster abstract submission, make sure that you do not exceed it. Whereas some reviewers might not notice or mind, others might view it as unprofessional, and possibly disrespectful, and use it as an easy decision rule for rejecting a submission.

Although the application process itself is straightforward, there are differences in opinion regarding whether and when it is advisable to submit your research. A commonly asked question is whether a poster or paper can be presented twice. Many would agree that it is acceptable to present the same data twice if the conferences draw different audiences (e.g., regional versus national conferences). Another issue to consider is when, or at what stage, a project should be submitted for presentation. Submitting research prior to analyzing your data can be risky. It would be unfortunate, for example, to submit prematurely, such as during the data collection phase, only to find that your results are not ready in time for the conference. Although some might be willing to take this risk, remember that it is worse to present low-quality work than not to present at all.

5. Preparing and Conducting Presentations
5.1 Choosing an Appropriate Outfit

Dress codes for conference proceedings typically are not formally stated; however, data suggest that perceptions of graduate student professionalism and competence are influenced by dress (e.g., Gorham et al., 1999). Although the appropriateness of certain attire is likely to vary, a good rule of thumb is to err on the side of professionalism. You also might consider the dress of your audience, and dress in an equivalent or more formal fashion. Although there will be people at conferences wearing unique styles of dress, students and professionals still early in their careers are best advised to dress professionally. It can be helpful to ask people who have already attended the conference what would be appropriate to wear. In addition to selecting your outfit, there are several preparatory steps you can take to help ensure a successful presentation.

5.2 Preparing for Poster Presentations
5.2.1 The Basics

The first step in preparing a poster is to be cognizant of the specific requirements put forth by the selected venue. For example, very specific guidelines often are provided, detailing the amount of board space available for each presenter (typically a 4-foot by 6-foot standing board is available). To ensure the poster will fit within the allotted space, it may be helpful to physically lay it out prior to the conference. This also may help to reduce future distress, given that back-to-back poster sessions are the norm; knowing how to arrange the poster in advance obviates the need to do so hurriedly in the few minutes between sessions. If you are using PowerPoint to design your poster, you can adjust the size of your layout to match the conference requirements.
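
If you prefer to script this step rather than use PowerPoint's slide-size dialog, a minimal sketch with the python-pptx library (our illustrative choice, not something required by any conference; the dimensions below are placeholders for whatever size your venue specifies) might look like this:

    from pptx import Presentation
    from pptx.util import Inches

    # Hypothetical poster dimensions; substitute the size stated in your
    # conference's presenter guidelines (e.g., a 48 x 36 inch poster).
    POSTER_WIDTH_IN = 48
    POSTER_HEIGHT_IN = 36

    prs = Presentation()
    prs.slide_width = Inches(POSTER_WIDTH_IN)    # set the custom page width
    prs.slide_height = Inches(POSTER_HEIGHT_IN)  # set the custom page height
    prs.save("poster_template.pptx")             # open this file and design as usual

Either route accomplishes the same thing; the point is simply to set the page size to the conference's stated dimensions before you begin laying out content.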

5.2.2 Tips For Poster Construction

The overriding goal for poster presentations is to summarize your study using an easily digestible, reader-friendly format (Grech, 2018b). As you will discover from viewing other posters, there are many different ways to do this. If you have the resources, professional printers can create large glossy posters that are well received. However, cutting large construction paper to use as a mat for laser-printed poster pages can also appear quite professional. Some companies will even print the poster on a fabric material so it can be folded up and stored easily in a suitcase for travel to and from the conference. Regardless of the framing, it is advisable to use consistent formatting (e.g., the same style and font size throughout the poster), large font sizes (e.g., at least 20-point font for text and 40-point font for headings), and aligned graphics and text (Zerwic et al., 2010). Another suggestion for enhancing readability and visual appeal is to use bullets, figures, and tables to illustrate important findings. Generally speaking, brief phrases (as opposed to wordy paragraphs) should be used to summarize pertinent points. It has been suggested that lines of text be limited to 10 or fewer words and that no more than four colors be used (Zerwic et al., 2010). In short, it is important to keep your presentation succinct and to avoid overcrowding the pages. Although there are a variety of fonts available and poster boards come in all colors imaginable, it is best to keep the poster professional. In other words, Courier, Arial, or Times New Roman are probably the best fonts to use because they are easy to read and will not distract or detract from the central message of the poster (i.e., your research). In addition, dark font (e.g., blue, black) on a light background (e.g., yellow, white) is easier to read in brightly lit rooms, which are the norm for poster sessions. Be mindful of appropriately acknowledging any funding agencies or other organizations (e.g., universities) on the poster or in the oral presentation slides.

Recently, the format of posters has shifted. The "better poster" format (https://youtu.be/1RwJbhkCA58) places the main research finding in large text in the center of the poster, with extra details and figures on the sides (Figure 12.1). This allows conference attendees to quickly assess whether the research is of interest to them and whether they would like more detailed information.

Figure 12.1 Poster formats.

5.2.3 What To Bring

When preparing for a poster presentation, consider which materials might be either necessary or potentially useful to bring. For instance, it might be wise to bring tacks (Grech, 2018b). It also is advisable to create handouts summarizing the primary aims and findings and to distribute these to interested colleagues. The number of copies to provide often depends on the size of the conference and the number of individuals attending a particular poster session; we have found that for larger conferences, 20 handouts are a good minimum. Handouts often are in high demand and supplies can be quickly depleted, in which case you should be equipped with a notepad to obtain the names and addresses of individuals interested in receiving the handout via mail or e-mail. With the "better poster" format, researchers often include a QR code on their poster. The QR code can link attendees to a copy of the poster, the researchers' contact information, or other relevant electronic documents.
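
As a brief illustration, one simple way to generate such a QR code is with the Python qrcode package (our assumption; free online QR generators work just as well), pointing it at wherever the poster or contact page is hosted:

    import qrcode  # pip install qrcode[pil]; image output relies on Pillow

    # Hypothetical URL; replace with the actual link to your poster PDF,
    # preprint, or contact page.
    POSTER_URL = "https://example.org/my-conference-poster.pdf"

    img = qrcode.make(POSTER_URL)  # build the QR code image
    img.save("poster_qr.png")      # place this image file on the poster layout

Whatever tool you use, print a test copy and scan the code with a phone before sending the poster to the printer.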

5.2.4 Critically Evaluate Other Posters

We also recommend critically evaluating other posters at conferences and posters previously used by colleagues. You will notice great variability in poster style and formatting, with some researchers using glossy posters with colored photographs and others using plain white paper and black text. Make mental notes regarding the effective and ineffective presentation of information. What attracted you to certain posters? Which colors stood out and were the most readable? Such informal evaluations likely will be invaluable when making decisions on aspects such as poster formatting, colors, font, and style.

5.2.5 Prepare Your Presentation

Poster session attendees will often approach your poster and ask you to summarize your study, so it is wise to prepare a brief overview of your study (e.g., 2 minutes). In addition, practice describing any figures or graphs displayed on your poster. Finally, attendees will often ask questions about your study (e.g., “What are the clinical implications?”, “What are some limitations to your study?”, “What do you recommend for future studies?”), so it may be helpful to have colleagues review your poster and ask questions. Table 12.1 provides some suggestions as to how to handle difficult questions.

Table 12.1 Handling difficult questions

Type of question | Suggestions
Questions without readily available answers
  • Admit your unfamiliarity with the question

  • Ask the questioner if he/she has thoughts as to the answer

  • Hazard a guess, but back it up with literature and acknowledge that it is a guess

  • Pose an answer to a related question

  • Simply state that the questioner raised an important point and move on to other questions

Irrelevant questions (e.g., “Where were you born?”)
  • Avoid digressing from the topic

  • Offer to meet with the questioner following the presentation

“Dumb” questions (e.g., “What does ‘hypothesis’ mean?”)
  • Offer a brief explanation and move on

  • Do not insult the questioner

Politically sensitive questions (e.g., being asked to comment on opposing theoretical viewpoint)
  • Stick to empirical data and avoid personal attacks

Multiple questions asked simultaneously
  • Choose either the most pertinent question or the question you would like to answer first (e.g., “I’ll start with your last question.”)

  • Ask the questioner to repeat the questions

Offensively worded questions
  • Avoid becoming defensive

  • Avoid repeating offensive language

Vague questions
  • Ask for clarification from the questioner

  • Restate the question in more specific terms

5.3 Conducting Poster Presentations

In general, presenting a poster is straightforward – tack the poster to the board at the beginning of the session, stand next to the poster and discuss the details of the project with interested viewers, and remove the poster at the end of the session. However, we have found that a surprisingly high number of presenters do not adequately fulfill these tasks. Arriving at the poster session at least five minutes early will allow you to find your allocated space, unpack your poster, and decide where to mount it on the board. When a poster consists of multiple frames, it might be easiest to lay out the boards on the floor before tacking them up on the board.

During the poster session, remember this fundamental rule – be present. It is permissible to browse other posters in the same session; however, always arrange for a co-author or another colleague knowledgeable about the study to staff the poster. Another guideline is to be available to answer questions and discuss the project with interested parties. In other words, refrain from reading, chatting with friends, or engaging in other activities that interfere with being available to discuss the study. At the conclusion of the poster session, it is important to remove your poster promptly so subsequent presenters have ample time to set up theirs. Suggestions for preparing and presenting posters are summarized in Table 12.2.

Table 12.2 Suggestions for poster presentations

Constructing your poster
  • Follow conference guidelines

  • Summarize study using a professional and reader-friendly format (e.g., short phrases, large font size, plain font)

  • Use consistent formatting throughout poster (e.g., same style and font type)

  • Use bullets, graphs, tables, and other visual aids

  • Keep succinct and avoid overcrowding on pages

Deciding what to bring
  • Tacks to mount poster

  • Handouts summarizing primary aims and findings

  • Notepad and pen for addresses

Evaluating other presentations
  • Observe variability in poster formats

  • Note effective and ineffective presentation styles

  • Incorporate effective aspects into your next presentation

Presenting your poster
  • Arrive at least five minutes early to set up

  • Be present or arrange for co-author(s) to be available to answer questions at the designated time

  • Avoid engaging in interfering activities (e.g., reading, talking to friends)

5.4 Preparing for Oral Presentations
5.4.1 The Basics

Similar to poster sessions, it is important to be familiar with and adhere to program requirements when preparing for oral presentations. For symposia, this might include sending an outline of your talk to the Chair and Discussant several weeks in advance and staying within a specified time limit when giving your talk. Although the Chair often will ensure that the talks adhere to the theme and do not excessively overlap, the presenter also can do this via active communication with the Chair, Discussant, and other presenters.

5.4.2 What To Bring

As with poster presentations, it is useful to anticipate and remember to bring necessary and potentially useful materials. For instance, individuals using PowerPoint should bring their slides in paper form in case of equipment failure. Equipment such as microphones often is available upon request; it is the presenter’s responsibility, however, to reserve equipment in advance.

5.4.3 Critically Evaluate Other Presenters

By carefully observing other presenters, you might learn valuable skills of how to enhance your presentations. Examine the format of the presentation, the level of detail provided, and the types and quality of audiovisual stimuli. Also try to note the vocal quality (e.g., intonation, pitch, pace, use of filler terms such as “um”), facial characteristics (e.g., smiling, eye contact with audience members), body movements (e.g., pacing, hand gestures), and other subtle aspects that can help or hinder presentations.

5.4.4 Practice, Practice, Practice

In terms of presentation delivery, repeated practice is essential for effective preparation (see Williams, 1995). For many people, students and seasoned professionals alike, public speaking can elicit significant distress. Given extensive data supporting the beneficial effects of exposure to feared stimuli (see Wolpe, 1977), repeated rehearsal is likely to produce positive outcomes, including increased comfort, increased familiarity with content, and decreased anxiety. Additionally, practicing will help presenters hone their skills and develop a more effective presentational style. We recommend practicing in front of an “audience” and soliciting feedback regarding both content and presentational style. Solicit feedback on every aspect of your presentation, from the way you stand to the content of your talk. It might be helpful to rehearse in front of informed individuals (e.g., mentors, graduate students, research groups) who ask relevant and challenging questions and subsequently provide constructive feedback (Grech, 2018a; Wellstead et al., 2017). Based on this feedback, determine which suggestions should be incorporated and modify your presentation accordingly. As a general rule, practice and hone your presentation to the point that you are prepared to present without any crutches (e.g., notes, overheads, slides).

5.4.5 Be Familiar and Anticipate

As much as possible, try to familiarize yourself with the audience both before and during the actual presentation (Baum & Boughton, 2016; Regula, 2020). With this background information, you can better tailor your talk to meet the professional levels and needs of those in attendance. It may be particularly helpful to have some knowledge of the educational background and general attitudes and interests of the audience (e.g., Is the audience composed of laypeople and/or professionals in the field? What are the listeners’ general attitudes toward the topic and toward you as the speaker? Is the audience more interested in practical applications or in design and scientific rigor?). Are you critiquing previous work by authors who may be in the audience? By conducting an informal “audience analysis,” you will be better equipped to adapt your talk to the particular needs and interests of the audience. It is important to have a clear message that you want the audience to take with them after the presentation (Regula, 2020). Most conference-goers will see many presentations during the conference, so a clear take-home message can make it easier for the audience to remember your key points.

Similarly, it might be helpful to have some knowledge about key logistical issues, such as room size and availability of equipment (Baum & Boughton, 2016). For example, will the presentation take place in a large, auditorium-like room or in a more intimate setting with the chairs arranged in a semi-circle? If the former, will a microphone be available? Is there a podium at the front of the room that might influence where you will stand? Given the dimensions of the room, where should the slide projector be positioned? Although it may be impossible to answer all such questions, it is a good idea to have a general sense of where the presentation will take place and who will be attending. It may also help to rearrange the seating so that latecomers do not become a distraction (Baum & Boughton, 2016). Suggestions for preparing and conducting oral presentations are summarized in Table 12.3.

Table 12.3 Oral presentations

Preparing for your oral presentation
  • Adhere to program requirements (e.g., stay within time limit)

  • Check on equipment availability

  • Reserve necessary equipment (e.g., laptop for PowerPoint presentation, adapters)

  • Bring necessary materials (e.g., flash drive, copy of notes or slides)

  • Be prepared to present without any materials in case of equipment failure

  • Be prepared to shorten your talk if previous speakers exceed their allotted time

Familiarizing yourself with the environment
  • Conduct informal “audience analysis” – familiarize yourself with audience before and during presentation

  • Tailor your talk to meet the professional levels and needs of the audience

  • Anticipate room size (e.g., will talk be held in a large auditorium or in a more intimate setting?)

Giving your talk
  • Dress professionally

  • Maintain good posture

  • Avoid distracting mannerisms (e.g., pacing and filler words such as “um”)

  • Avoid standing in one place or behind a podium

  • Maintain eye contact with your audience

  • Be vocally energetic and enthusiastic

Enhancing your presentation
  • Practice, practice, practice!

  • Solicit feedback from colleagues and make appropriate modifications

  • Observe other presenters; imitate effective presentational styles and incorporate effective modes of delivery

  • Use enhancements and audio/visual aids such as video clips, PowerPoint slides, cartoons or comics

  • Use humor and illustrative examples (e.g., metaphors, real-life stories, cartoons, comic strips, jokes)

  • Avoid information overload; instead, clearly deliver 2–4 “take-home messages”

If the conference is virtual, it is critical to become familiar with the software or online platform well in advance of a synchronous event. For example, practicing with the camera and microphone and watching recordings of yourself will allow you to fine-tune the presentation and audio quality. Whether the event is synchronous or asynchronous, there are a number of tips for optimizing video presentations, including how to position the camera, how to light the presenter, which behaviors to include or avoid, and what to consider in terms of the background. Given the variability and subjectivity of video presentation advice, we encourage readers to research this extensive topic and to personalize and optimize their virtual presentations.

5.5 Conducting Oral Presentations
5.5.1 Using Audiovisual Enhancements

One strategy for enhancing oral presentations is to use audio/visual stimuli, such as slides or props (e.g., Grech, 2019; Hoff, 1988; Wilder, 1994; see Table 12.4). When using visual enhancements, keep them simple and clearly highlight important points using a readable, consistent typeface (Blome et al., 2017; Grech, 2018a). Information should be easily assimilated and reader-friendly, which generally means limiting text to a few phrases rather than complete sentences or paragraphs and using sufficiently large font sizes (e.g., 36- to 48-point font for titles and 24- to 36-point font for text). In addition, it is a good idea to keep titles to one line and bullets to no more than two lines of information. Color schemes should be relatively subdued and “professional” in appearance. For slide presentations, a dark background with light text might be easier to read. Use sans serif fonts (e.g., Arial) rather than serif fonts (e.g., Times New Roman; Lefor & Maeno, 2016). Some conferences prefer a particular slide size, so it is important to check their requirements before designing the presentation. Additionally, depending on the room setup, it can be hard for the audience to see the bottom of slides, so it can be helpful to avoid placing text near the bottom of the slide. See Figure 12.2 for examples of a poor and a good slide for an oral presentation.

Table 12.4 Using audiovisual enhancements

Examples of audiovisual aids
  • Slides

  • Video clips

  • Cartoons and comic strips

Tips for using slides
  • Test equipment in advance

  • Keep it simple; use to clarify and enhance

  • Avoid going overboard (too much might detract from presentation)

  • Use reader-friendly format (e.g., short phrases, avoid overcrowding)

  • Use bullet points rather than sentences

  • Remember One × Six × Six: only ONE idea per visual; fewer than SIX bullets per visual; fewer than SIX words per bullet point

  • Highlight important points using readable, consistent typeface

  • Use professional color schemes (e.g., light background, dark text for overheads and dark background, light text for slides)

  • Speak to audience, not to visual aids

  • Stand to the side of your screen to avoid blocking the audience’s view

  • Pause as you change slides; practice for smooth transitions

  • Be prepared to present without your overheads/slides

Tips for using videos
  • Test equipment in advance

  • Pre-set volume levels and cue video in advance

  • Introduce video clip and announce its length

  • Dim the lights before playing

  • Give a concluding statement following the video

  • Use video clips to illustrate and enhance presentations

Figure 12.2 Sample poor and good slides for an oral presentation.

Using audiovisual aids, such as video clips, also can contribute substantially to the overall quality and liveliness of a presentation. When incorporating video clips, pre-set volume levels and cue up the video in advance. We also recommend announcing the length of the video, dimming lights, and giving a concluding statement following the video.

Multimedia equipment and audiovisual aids have the potential to liven up even the most uninspiring presentations; however, be cautious about becoming overly dependent on any medium. Rather, be fully prepared to deliver a high-quality presentation without the use of enhancements. It also might be wise to prepare a solid “back-up plan” in case your original mode of presentation must be abandoned due to equipment failure or some other unforeseen circumstance. Back-up overheads, for example, might rescue a presenter who learns of equipment failures minutes before presenting.

When using slides, it is important to avoid “going overboard” with information (Blome et al., 2017; Regula, 2020). Many of us will present research with which we are intimately familiar and in which we are deeply invested. With projects that are particularly near and dear (e.g., theses and dissertations), it may be tempting to tell the audience as much as possible. It is not necessary, for example, to describe the intricacies of the data collection procedure and present every pre-planned and post-hoc analysis, along with a multitude of significant and non-significant F-values and coefficients. Such information overload might bore audience members, who are unlikely to care about or remember so many fine-grained details. Instead of committing this common presentation blunder, present key findings in a bulleted, easy-to-read format rather than in sentences. To avoid overcrowding slides and overheads, remember the One × Six × Six rule of thumb: only ONE idea per visual, fewer than SIX bullets per visual, and fewer than SIX words per bullet (Regula, 2020; see Figure 12.2).

The length of your oral presentation will vary depending on time restrictions, but there are some general guidelines for how to structure it. Zerwic et al. (2010) proposed a possible structure for research presentations that includes title, acknowledgments, background, specific aims, methods, results, conclusions, and future directions sections. They also recommended how many slides to allocate to each section: the title, acknowledgments, background, specific aims, conclusions, and future directions sections each take roughly one slide, with the majority of slides devoted to the methods and results.

In short, remember and hold fast to this basic dictum: audio/visual aids should be used to clarify and enhance (Cohen, 1990; Grech, 2018a; Wilder, 1994). Aids that detract, confuse, or bore one’s audience should not be used (soliciting feedback from colleagues and peers will assist in this selection process; Regula, 2020). Overly colorful and ornate visuals or excessive slide animation, for example, might detract and distract from the content of the presentation (Lefor & Maeno, 2016). Likewise, visual aids containing superfluous text might encourage audience members to read your slides rather than attend to your presentation. Keeping visuals simple also might prevent another presentation faux pas: reading verbatim from slides.

5.5.2 Using Humor and Examples

The effective use of humor might help “break the ice,” putting you and your audience at ease. There are many ways in which humor can be incorporated into presentations, such as through the use of stories, rich examples, jokes, and cartoons or comic strips. As with other aids, humor should be used in moderation and primarily to enhance a presentation (Collins, 2004). When using humor, it is important to be natural and brief and to use non-offensive humor related to the subject matter.

Another strategy for spicing up presentations is to use stories and examples to illustrate relevant and important points (Bekker & Clark, 2018). This can be accomplished in many ways, such as by providing practical, real-life examples or by painting a mental picture for the audience using colorful language (e.g., metaphors, analogies). Metaphorical language, for instance, might facilitate learning (Skinner, 1953) and help audience members remember pertinent information. Similarly, amusing stories and anecdotes can be used to engage the audience and decrease the “impersonal feel” of more formal presentations. Regardless of whether or how humor is used, remember to do what “works” and feels right. Trying too hard to be amusing may come across as contrived and stilted, producing the opposite of the intended effect.

5.5.3 Attending To Other Speakers

When presenting research in a group forum (e.g., symposia), it may be beneficial to attend to other speakers, particularly those presenting before you. Being familiar with the content of preceding talks will help to reduce the amount of overlap and repetition between presentations (although some overlap and repetition might be desirable). You might, for example, describe the similarities and differences across research projects and explain how the current topic and findings relate to earlier presentations. The audience probably will appreciate such integration efforts and have a better understanding of the general topic area.

5.5.4 Answering Questions

Question and answer sessions are commonplace at conferences and provide excellent opportunities for clarifying ambiguous points and interacting with the audience. When addressing inquiries, it is crucial to maintain a professional, non-defensive demeanor (Wellstead et al., 2017). Treat every question as legitimate and well-intentioned, even if it comes across as an objection or insult. As a general rule, in large auditoriums it is good to repeat the question so that everyone in the room hears it. If a question is unclear or extremely complicated, it may be wise to pause and organize your thoughts before answering. If necessary, request clarification or ask the questioner to repeat or rephrase the question. It also may be helpful to anticipate and prepare for high-probability questions (Wilder, 1994).

There are several types of difficult questions that can be anticipated, and it is important to know how to handle these situations (Table 12.1). Also, we recommend preparing for a non-responsive audience. If audience members do not initiate questions, some tactics for preventing long, uncomfortable silences are to pose commonly asked questions, reference earlier comments, or take an informal survey (e.g., “Please raise your hand if you work clinically with this population.”). Even if many questions are generated and lead to stimulating discussions, it is important to adhere to predetermined time limits. End on time and with a strong concluding statement.

Above all, avoid becoming defensive and critical, particularly when answering challenging questions. Irrespective of question quality or questioner intent, avoid making patronizing remarks or answering in a way that makes the questioner feel foolish or incompetent. Try to avoid falling into an exclusive dialogue with one person, which might cause other members of the audience to feel excluded or bored. If possible, offer to meet with the questioner to address their questions and concerns at the end of the talk. Another suggestion is to avoid turning answers into mini-lectures that showcase your accumulated knowledge and expertise in a particular area. Instead, provide only information that is directly relevant to the specific question posed by the audience (Wilder, 1994).

6. Conclusion

There are great benefits to presenting research, both to the presenter and the audience. Before presenting, however, you should consider carefully a number of preliminary issues. For instance, you must decide whether your study is worthy of presentation, where to present it, and what type of presentation to conduct. Once these decisions are made, prepare by practicing your presentation, examining other presentations, and consulting with colleagues. Sufficient preparation should enhance the quality of your presentation and help decrease performance anxiety. We are confident that you will find that a well-executed presentation will prove to be a rewarding and valuable experience for you and your audience.

13 Publishing Your Research

Alan E. Kazdin

A key characteristic of science is the accumulation of knowledge. This accumulation depends not only on the completion of research but also on preparation of reports that disseminate the results. Consequently, publication of research is an essential part of science. Publication can serve other goals as well. Preparing a manuscript for publication helps the investigator to consider the current study in a broader context and chart a course for a series of studies. In addition, many professional and career goals are served by publishing one’s research. Publication of one’s research signals a level of competence and mastery that includes developing an idea, designing, executing and completing the study, analyzing the results, preparing a written report, submitting it for publication, and traversing the peer-review process. This chapter focuses on publishing one’s research. The topics include preparing a manuscript, selecting a publication outlet, submitting the manuscript for review, and revising the manuscript as needed for publication.

There are many outlets to communicate the results of one’s research. Prominent among these are presentations at professional meetings, chapters in edited books, full-length books, and professional journals. Journal publication, the focus of this chapter, holds special status because it is the primary outlet for original research. In terms of one’s career, journal publication also plays a special role primarily because articles accepted for publication usually have undergone peer review. Acceptance and publication attest to the views of one’s peers that there is merit in the work. For any given article, only a few peers (one editor, two to three reviewers) may actually see the manuscript. Multiple publications add to this; after a few publications, one can assume there is a building consensus about one’s work, i.e., that others view the contributions as important and worthy of publication.

1. Preparing a Manuscript for Publication
1.1 Writing the Article

A central goal of scientific writing is to convey what was actually done so that the methods and procedures can be replicated. Concrete, specific, operational, objective, and precise are some of the characteristics that describe the writing style. The effort to describe research in concrete and specific ways is critically important. However, the task of the author goes well beyond description.

Preparation of the report for publication involves three interrelated tasks that I refer to as description, explanation, and contextualization. Failure to appreciate or to accomplish these tasks is a main source of frustration for authors as their papers traverse the process of manuscript review toward journal publication. Description is the most straightforward task and includes providing details of the study. Even though this is an obvious requirement of the report, basic details often are omitted in published articles (e.g., sex, socioeconomic status, and race of the participants; means and standard deviations) (e.g., Case & Smith, 2000; Gerber et al., 2014; Tate et al., 2016). Omission of basic details can hamper scientific progress. If a later study fails to replicate the findings, it could be because the sample is very different along some dimension or characteristic; yet we cannot surmise that without knowing at least the basic details of the sample in both studies. If a study does replicate the findings, that is important, but is the new finding an extension to a new type of sample? Again, we need basic information in both studies to allow such comparisons.

Explanation is more demanding insofar as it refers to presenting the rationale for several facets of the study. The justification, the decision-making process, and the connections between the decisions and the goals of the study move well beyond description. Here the reader of the manuscript has access to the author’s decision points. There are numerous decision points pertaining to such matters as selecting the sample, choosing among the many options for how to test the idea, selecting the measures, and including various control and comparison groups. The author is obliged to explain why the specific options elected are well suited to the hypotheses or the goals of the study. There is a persuasion feature that operates here. The author of the manuscript is persuaded that the decisions are reasonable ways to address the overriding research question; now the author must convey that in a way that persuades the reader. In other words, explanation conveys why the procedures, measures, and so on were selected, and that explanation ought to be cogent, persuasive, and above all explicit. We do not want the reader to think, “This is an important research question, but why study it that way?” In many cases a related prior question of the same ilk emerges: why do we even need this study, or why is the study important? For the many decision points, beginning with selection of the research question, these are very reasonable questions that the author ought to anticipate and pre-empt.

Finally, contextualization moves one step further away from description and addresses how the study fits in the context of other studies and in the knowledge base more generally. This facet of article preparation reflects such lofty notions as scholarship and perspective, because the author places the descriptive and explanatory material into a broader context. Essentially, the author is making the case for the study based on the knowledge base. Relatively vacuous claims (e.g., this is the first study of this, or the first study to include this or that control condition or measure) are rarely a strong basis for the study and often mean, or are interpreted as meaning, that the author could not come up with something better. Without context, any “first” is not very important by itself. Indeed, it is easy to be first on a topic that is not very important and has been purposely neglected. We need a more compelling rationale.

For example, if a study is done on why people commit suicide, we need the context of why this specific study ought to be done and where this piece fits in the puzzle of understanding. Perhaps prior research omitted some critical control procedure; perhaps there is a special group with a novel characteristic that reduces (or increases) the likelihood of suicide and that would inform the field in unique ways; or perhaps some new twist on a theory or intervention will have clear implications for reducing suicide attempts. These and other such comments convey three points that are wise to address: (1) there is a gap in knowledge, (2) that gap is important, and (3) that gap will be filled in whole or in part by this study.

1.2 General Comments

The three components I identified vary in difficulty. When individuals write up their first project for publication, they focus heavily on the descriptive part to make sure all the material and sections are included. This part is fundamental. Explanation and contextualization are much more difficult. Explanation requires having considered the options and conveying to the reader why the one selected was a good choice. Yet one’s first study often is conducted with, or comes from, an advisor who has made these decisions, and the bases of those decisions might be buried in one of the advisor’s other articles or otherwise remain tacit. As authors, we need to be prepared for other scientists looking at our paper and doing their job by asking, “Why on earth did they use that population [measure, control condition, means of data evaluation, and so on]?” These are not only legitimate questions but are central to science.

Contextualization is even more difficult. It benefits from experience, scholarship, time, and knowledge of as many related areas of work as one can bring to bear. How is the study connected to the literature or topic? How does it relate to theory, to other disciplines, to a critical problem we ought to care about or that is now facing society? The puzzle analogy might help. A given study is one puzzle piece, and merely showing that piece to someone is not inherently interesting; it may be inherently boring. Yet the piece becomes more interesting as the other pieces are shown (e.g., the outside box with a full photo of the puzzle) and even more interesting, fascinating actually, if one can paint a verbal picture of the whole puzzle and show how one or two pieces are needed and this study is that part! Explanation gives the rationales for decisions; contextualization determines whether the study is compelling. Authors often complain that the reviewers did not understand, “get,” or appreciate the importance of their study. The authors are usually completely right, but guess whose responsibility that is?

The extent to which description, explanation, and contextualization are accomplished increases the likelihood that the report will be viewed as a publishable article and facilitates integration of the report into the knowledge base. Guidelines are provided later in the chapter to convey these tasks more concretely in the preparation and evaluation of research reports. The guidelines focus on the logic of the study, the interrelations of the different sections, the rationale for specific procedures and analyses, the strengths and limitations, and where the study fits in the knowledge base. Consider the main sections of the manuscript that are prepared for journal publication and how these components can be addressed (see Footnote 1).

2. Sections of an Article
2.1 Title

The title of an article includes the key variables, focus, and population with an economy of words. The special features of the study are included to convey the focus immediately to potential readers. It is critical here to be direct, clear, and concise (e.g., “Memory loss and gains associated with aging” or “Predictors of drug use and abuse among adolescents”). These examples are especially concise. Ordinarily an author is encouraged to fit the title within 10–12 words. The words ought to be selected carefully. Titles occasionally are used to index articles in large databases. Words that are not needed or that say little (e.g., “preliminary findings,” “implications,” “new findings”) might be more judiciously replaced by substantive or content words (e.g., among preschool children, the elderly; consequences for sleep and stress) that permit the article to be indexed more broadly than it otherwise would have been.

Occasionally, comments about the method are included in the title or, more commonly, in the subtitle. Terms like “a pilot study” or “preliminary report” may have many different meanings, such as the fact that this is an initial or interim report of a larger research program. These words could also be gently preparing readers for some methodological surprises and even tell us not to expect too much from the design. These qualifying terms might be accurate, but they implicitly apologize or ask for mercy as well. It is better to give a strong title and, in the write-up, provide the explanation (decision making) and context to convey why this study was done, where it fits in the scheme of the literature, and why it was important. No apologies needed; just let the reader know your thinking on the matter. Although I am reluctant to boast, my dissertation won a prize for the best qualifying terms in a title. (In the subtitle of my dissertation, I conveyed this as: “A pre-preliminary, tentative, exploratory pilot study©.”)

In some cases, terms are added to the title such as “A Controlled Investigation,” which moves our expectation in the other direction, namely, that the present study is somehow well conducted and controlled, and perhaps by implication stands in contrast to other studies in the field (or in the author’s repertoire). Usually, words noting that the investigation is controlled are not needed unless this is truly a novel feature of research on the topic. Some added words can be important because they signal something novel. An example would be the subtitle “A Replication.” That is important because replications are of interest and not often published, and they have taken on even greater importance given the concerns in science that many studies produce findings that are not replicable. Other words to add as a subtitle might be “A Review” or “A Meta-Analysis.” These are important to convey that the article is not an individual investigation but an evaluation of a broad literature.

Occasionally authors are wont to use titles with simple questions: “Is depression really a detriment to health?” or “Is childhood bullying among boys a predictor of domestic violence in adulthood?” In general, it is advisable to avoid “yes, no” questions in the title. Science and findings are often nuanced, and answers are likely to be both yes and no under very different circumstances or for some subgroups of people but not for others. As an example, consider a hypothetical yes–no question for the title of a study: “Is cigarette smoking bad for one’s health?” For anyone on the planet, the answer might be a resounding yes. Yet the yes–no nature of the question makes this a poor choice of title because the answer is likely to depend on how smoking is defined (e.g., how much smoking: a cigarette a year, a pack after each meal) and how health is defined (e.g., mental, physical, which diseases or disorders). Very familiar is how horrible smoking is for one’s physical health in so many domains (e.g., heart disease, cancer, chronic respiratory disease), but the question in the title can be answered both yes and no. Less familiar is the fact that cigarette smoking and exposure to cigarette smoke (among nonsmokers) reduce the risk for Parkinson’s disease, and there are reasonable explanations for that based on brain chemistry and neurotransmitters (Ma et al., 2017; Miller & Das, 2007). Clearly, the hypothetical title is plainly simplistic and not very helpful or informative, because we can show many circumstances in which yes and no are correct answers. I am not arguing in favor of cigarette smoking (although I used to be a chain smoker until I switched to cigarettes). I am advising against titles of empirical articles that pose a yes–no question, given that most answers involve essays. Few phenomena allow the simplistic thinking the question can reflect.

2.2 Abstract

The Abstract is likely to be read by many more people than is the full article. The Abstract will be entered into various databases and be accessible through Internet and online searches. Many journals list the tables of contents for their issues and provide free access on the Web to abstracts of the articles but charge for the full article. Consequently, the Abstract is the only information that most readers will have about the study. For reviewers of the manuscript and readers of the journal article, the Abstract conveys what the author studied and found. Ambiguity, illogic, and fuzziness here are ominous. Thus, the Title and Abstract are sometimes the only impression, or at least the first impression, one may have of the study.

Obviously, the purpose of the Abstract is to provide a relatively brief but comprehensive statement of goals, methods, findings, and conclusions of the study. Critical methodological descriptors pertain to the participants and their characteristics, experimental and control groups or conditions, design, and major findings. Often space is quite limited; indeed a word limit (e.g., 150–250 words maximum) may be placed on the abstract. It is useful to deploy the words to make substantive statements about the characteristics of the study and the findings, rather than to provide general and minimally informative comments. For example, vacuous statements (“Implications of the results were discussed” or “Future directions for research were suggested”) ought to be replaced with more specific comments of what one or two implications and research directions are (e.g., “The findings suggest that the family and peers might be mobilized to prevent drug abuse among adolescents and that cultural influences play a major role.”). Also, the more specific comments can convey the study’s relevance and interest value beyond what is suggested by the manuscript title or the opening comments of the Abstract. As a reader, I am not going to read very eagerly an article with the vacuous “implications” or “future directions” sentences, but if I am interested in the specific topics mentioned as implications (brain activity, the immune system, family, peers, culture), this article is a must for me to read. As authors, we often lament the word restrictions placed on us in the Abstract, but the first task is to make sure we are using the existing allotment with maximum information.

2.3 Introduction

The Introduction is designed to convey the overall rationale and objectives. The task of the author is to convey in a crisp and concise fashion why this study is needed and the current questions or deficiencies the study is designed to address. The section should not review the literature in a study-by-study fashion, but rather convey issues and evaluative comments that set the stage for the study. Placing the study in the context of what is and is not known (contextualization) and the essential next step in research in the field requires mastery of the pertinent literatures, apart from reasonable communication skills. Ironically, mastery of the literature is needed so the author knows precisely what to omit from the Introduction. A vast amount of material one has mastered and that is very interesting will need to be omitted because it does not set the stage or convey the precise context for this study.

Saying that the study is important (without systematically establishing the context) or noting that no one else has studied this phenomenon (measure or sample) usually is a feeble attempt to short-circuit the contextualization of the study. In a manuscript I reviewed, the author mentioned four times in the Introduction (and three more times in the Discussion) that this was the first time this study had been done. This was not amusing. Someone had not advised or helped the author very much, and a very poor case was made for the study. Among the tasks of the Introduction is to lead the reader to the conclusion that the study is important and worthwhile. Telling the reader that the study is important is an argument from authority, and that is not how science works. It might even strongly suggest that the author has not done his or her contextualization homework.

It may be relevant to consider limitations of previous work and how those limitations can be overcome. These statements build the critical transition from an existing literature to the present study and the rationale for design improvements or additions in relation to those studies. It is important to emphasize that “fixing limitations” of prior work is not necessarily a strong basis for publishing a study. The author must convey that the limitations of prior work are central to a key building block in theory or the knowledge base. Convey that because of that specific limitation, we really do not know what we thought we did, or that there is a new and important ambiguity hidden in prior studies, given what was studied and by what means. Alternatively, the study may build along new dimensions to extend the theory and constructs to a broader range of domains of performance, samples, and settings. The rationale for the specific study must be very clearly established. Theory and previous research usually are the proper springboard to convey the importance of the current study.

In general, the Introduction will move from the very general to the specific. The very general refers to the opening of the Introduction that conveys the area, general topic, and significance of a problem. For example, in studies of diagnosis, assessment, treatment, or prevention of clinical dysfunction, the Introduction invariably includes a paragraph to orient the reader about the seriousness, prevalence or incidence, and economic and social costs of the disorder. Reviewers of the manuscript are likely to be specialists in the area of the study and hence know the context very well. Yet, many potential readers would profit from a statement that conveys the significance, interest, and value of the main focus of the study.

After the initial material, the Introduction moves to the issues that underlie this specific study. Here the context that frames the specific hypotheses of the study is provided, reflecting the theory and research that are the impetus for the investigation. There is an introduction syllogism, as it were, a logic that leads the reader from previous theory and research to the present study along a direct path. Extended background paragraphs without close connections to the hypotheses of the study are a common weakness of manuscripts rejected for publication.

The Introduction does not usually permit us to convey all the information we wish to present. In fact, the limit is usually 2–5 manuscript pages. A reasonable use of this space is in brief paragraphs or implicit sections that describe the nature of the problem, the current status of the literature, the extension to theory and research this study is designed to provide, and how the methods to be used are warranted. The penultimate or final paragraph of the Introduction usually includes a statement of the purpose of the study and the specific hypotheses and predictions. By the time the reader reaches this paragraph or set of paragraphs, it should be very clear that these hypotheses make sense, are important, and address a critical issue or need in the knowledge base. In short, the Introduction must establish that the study addresses a central issue. To the extent that the author conveys a grasp of the issues in the area and can identify the lacunae that the study is designed to fill, the quality of the report and the chances of acceptance for journal publication are greatly improved. By the time readers arrive at the purpose-of-the-study or hypotheses paragraph, they should be nodding enthusiastically and saying to themselves, “This study is really needed; it should have been done years ago; I am so glad it is being done now.” As authors we often believe a description of the study is all that is needed. Yet the identical study (description of what was done) can be viewed as weak and just another study, or as strong, compelling, and sorely needed. All this can be decided by how the Introduction is cast.

2.4 Method

This section of the paper encompasses several points related to who was studied, why, and how. The section not only describes critical procedures, but also provides the rationale for methodological decisions. Subject selection, recruitment, screening, and other features ought to be covered in detail. Initially, the subjects or clients are described. Why was this sample included and how is this appropriate to the substantive area and question of interest? In some cases, the sample is obviously relevant because participants have the characteristic of interest (e.g., parents accused of child abuse, siblings of children with autism spectrum disorder) or are in a setting of interest (e.g., daycare center, wilderness camp). In other cases, samples are included merely because they are available. Such samples, referred to as samples of convenience, may include college students or a clinic population recruited for some other purpose than to test the hypotheses of this study. The rationale for the sample should be provided to convey why this sample provides a good – or if not good, a reasonable – test of the hypotheses and whether any special features may be relevant to the conclusions.

The design is likely to include two or more groups that are treated in a specific fashion. The precise purpose of each group and the procedures to which they are exposed should be clarified. Control groups should not merely be labeled as such with the idea that the name is informative. The author should convey precisely what the group(s) is designed to control. The author is advised to identify the critical methodological concerns and to convey how these are controlled in the design. Plausible threats to experimental validity that are uncontrolled deserve explicit comment to allay the reasonable concerns of the reviewers (see Kazdin, 2017).

Several measures are usually included in the study. Why the constructs were selected for study should have been clarified in the Introduction. Then the specific measures and why they were selected to operationalize the constructs should be presented in the Method section. Information about the psychometric characteristics of the measures is often highlighted. This information relates directly to the credibility of the results. Apart from individual assessment devices, the rationale for including or omitting areas that might be regarded as crucial (e.g., multiple measures, informants, settings) deserves comment.

Occasionally, ambiguous statements may enter into descriptions of measures. For example, measures may be referred to as “reliable” or “valid” in previous research, as part of the rationale for their use. There are, of course, many different types of reliability and validity. It is important to identify those characteristics of the measure found in prior research that are relevant to the present research. For example, high internal consistency (reliability) in a prior study may not be a strong argument for use of the measure in a longitudinal design where the author cares more about test–retest reliability. Even previous data on test–retest reliability (e.g., over 2 weeks) may not provide a sound basis for repeated testing over annual intervals. The author ought to present information to convey the suitability of the measures for the study.

It often appears that reliability and validity of assessment are not routinely taught, at least if one looks at Method sections of articles in clinical psychology. These are important concepts because they can determine what is measured by a given instrument and how well. One sees more routinely that authors report Cronbach’s alpha for a measure and then move on. Alpha is one measure of reliability (internal consistency) and can be very useful to know. However, it says little about the validity of the measure and by itself does not justify the choice of a specific measure without much more explanation. Perhaps add a couple of sentences in this section to comment specifically on reliability and validity and what types have been supported in prior research. This is not merely to convince the reader but also ourselves of the wisdom of selecting this measure. It is unreasonable to expect the measures to have the ideal reliability and validity data that would let the investigator make a flawless case for their use. Yet, make the case from what psychometric data there are. If data are not available, include some analyses in the study to show that the measure(s) behave in ways consistent with pertinent forms of reliability or validity (Kazdin, 2017).
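
To make the distinction concrete, here is a minimal sketch in Python (simulated data for a hypothetical five-item scale administered twice; illustrative only, not drawn from any particular measure or study) showing that internal consistency and test–retest reliability answer different questions about the same instrument:

    # Illustrative only: two kinds of reliability for a hypothetical 5-item measure
    # administered at two time points (simulated data; the scale and values are arbitrary).
    import numpy as np

    rng = np.random.default_rng(0)
    true_score = rng.normal(loc=3.0, scale=1.0, size=(100, 1))   # hypothetical latent trait
    wave1 = true_score + rng.normal(scale=0.5, size=(100, 5))    # five item scores, time 1
    wave2 = true_score + rng.normal(scale=0.5, size=(100, 5))    # the same items, time 2

    def cronbach_alpha(items):
        # Internal consistency: k/(k-1) * (1 - sum of item variances / variance of the total score)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Test-retest reliability: correlation of total scores across the two administrations.
    retest_r = np.corrcoef(wave1.sum(axis=1), wave2.sum(axis=1))[0, 1]

    print(f"Cronbach's alpha (time 1): {cronbach_alpha(wave1):.2f}")
    print(f"Test-retest correlation:   {retest_r:.2f}")

High values on one index do not guarantee high values on the other, which is why the form of reliability cited should match the way the measure will be used in the study.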

2.5 Results

It is important to convey why specific statistical tests were selected and how these serve the goals of the study. A useful exercise is for the investigator to read that paragraph about hypotheses and predictions from the Introduction and then immediately start reading the Results section, i.e., for the moment completely bypass the Methods. The results ought to speak directly to and flow from that narrative statement in the Introduction.

Analyses often are reported in a rote fashion in which, for example, the main effects are presented and then interactions for each measure. The author presents the analyses in very much the same way as the software output. Similarly, if several dependent measures are available, a set of analyses is automatically run (e.g., omnibus tests of multivariate analyses of variance followed by univariate analyses of variance for individual measures). The tests may not relate to the hypotheses, predictions, or expectations outlined at the beginning of the paper. It is important that the statistical tests be seen and presented as tools to answer questions or enlighten features of those questions and to convey this to the reader. The reader should not be able to legitimately ask, “Why was that statistical test done?” Knowledge of statistics is critical for selecting the analyses to address the hypotheses and conditions met by the data. Yet, just as important in the presentation is conveying precisely why a given statistical test or procedure is suitable for testing the hypotheses and, in turn, what the results of that test reveal in relation to those hypotheses.

It is often useful to begin the Results by presenting basic descriptors of the data (e.g., means, standard deviations for each group or condition), so the reader has access to the numbers themselves. The main body of the Results tests the hypotheses or evaluates the predictions. Organization of the Results (subheadings) or brief statements of hypotheses before the analyses are often helpful to prompt the author to clarify how the statistical test relates to the substantive questions and to draw connections for the reader.
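
As a simple illustration (hypothetical two-group data and a made-up prediction, not from any particular study), here is a brief sketch in Python of descriptive statistics reported first, followed by the planned test framed as the tool that answers the stated prediction:

    # Illustrative only: descriptive statistics by condition, then the planned test of a
    # hypothetical prediction (treatment scores higher than control on the outcome).
    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "condition": ["treatment"] * 50 + ["control"] * 50,
        "outcome": np.concatenate([rng.normal(12, 3, 50), rng.normal(10, 3, 50)]),
    })

    # Basic descriptors (means, standard deviations, n) that give readers the numbers themselves.
    print(df.groupby("condition")["outcome"].agg(["mean", "std", "count"]).round(2))

    # The planned test, stated in terms of the prediction it evaluates rather than as rote output.
    t, p = stats.ttest_ind(df.loc[df.condition == "treatment", "outcome"],
                           df.loc[df.condition == "control", "outcome"])
    print(f"Test of the predicted group difference: t = {t:.2f}, p = {p:.3f}")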

Several additional or ancillary analyses may be presented to elaborate the primary hypotheses. For example, one might be able to reduce the plausibility that certain biases may have accounted for group differences based on supplementary or ancillary data analyses. Ancillary analyses may be more exploratory and diffuse than tests of primary hypotheses. Manifold variables can be selected for these analyses (e.g., sex, race, height differences) that are not necessarily conceptually interesting in relation to the goals of the study. The author may wish to present data, data analyses, and findings that were unexpected, were not of initial interest, and were not the focus of the study. The rationale for these excursions and the limitations of interpretation are worth noting. From the standpoint of the reviewer and reader, the results should make clear what the main hypotheses were, how the analyses provide appropriate and pointed tests, and what conclusions can be reached as a result. As in other portions of the manuscript, how the author has reached a decision (what analysis) and why are very important.

2.6 Discussion

The Introduction began with a statement of the need for this study and issues or lacunae in theory or research the study was designed to address. The Discussion continues the storyline by noting what we know now and how the findings address or fulfill the points noted previously. With the present findings, what puzzle piece has been added to the knowledge base, what new questions or ambiguities were raised, what other substantive areas might be relevant for this line of research, and what new studies are needed? From the standpoint of contextualization, the new studies referred to here are not merely those that overcome methodological limitations of the present study, but rather focus on the substantive next steps for research. Also, this is not the place for vacuous suggestions such as, “This study needs to be replicated with people who are …”. If one is suggesting an extension of the study to different subjects, settings, or other dimensions, specify exactly why this specific extension would be of special interest.

The Discussion usually includes paragraphs to provide an overview of the major findings, integration or relation of these findings to theory and prior research, limitations and ambiguities and their implications for interpretation, and future directions. These are implicit rather than formally delineated sections and the author ought to consider the balance of attention to each topic. Usually, the Discussion is completed within 3–5 manuscript pages.

Description and interpretation of the findings can raise a tension between what the author wishes to say about the findings and their meaning versus what can be said in light of how the study was designed and evaluated. It is in the Discussion that one can see the interplay of the Introduction, Methods, and Results sections. For example, the author might draw conclusions that are not quite appropriate given the method and findings. The Discussion may convey flaws, problems, or questionable methodological decisions within the design that were not previously evident. That is, the reader of the manuscript can now state that if these are the statements the author wishes to make, the present study (design, measures, or sample) is not well suited. The slight mismatch of interpretative statements in the Discussion and Methods is a common, albeit tacit, basis for not considering a study as well conceived and executed. A slightly different study may be required to support the specific statements the author makes in the Discussion. It is important to be precise about what can and cannot be asserted in light of the design and findings.

It is usually to the author’s credit to examine potential limitations or sources of ambiguity of the study. A candid, non-defensive appraisal of the study is very helpful. Here too, contextualization may be helpful because limitations of a study also are related to the body of prior research, what other studies have and have not accomplished, and whether a finding is robust across different methods of investigation. Although it is to the author’s credit to acknowledge the limitations of the study, there are limits on the extent to which reviewers grant a pardon for true confessions. At some point, the flaw is sufficient to preclude publication, whether or not the author acknowledges it. For example, the authors of the study might note, “A significant limitation of the present study is the absence of a suitable control group. We are aware that this might limit the strength of the conclusions.” Awareness here does not strengthen the demonstration itself. A huge limitation in the study is sufficiently damaging to preclude drawing valid inferences.

In noting the limitations of the study, there is a useful structure for the presentation. First, note the limitation. Then, to the extent reasonable, discuss why this limitation is not likely to influence the conclusion (if this is the case). If the role of the limitation cannot be diminished or dismissed by sound reasoning or related data, note that addressing this issue is a logical if not important next step for research. All studies have limitations by their very nature, so reasoning about their likely and unlikely impact on the findings is invariably relevant.

At other points, acknowledging potential limitations conveys critical understanding of the issues and guides future work. For example, in explaining the findings, the author may note that although the dependent measures are valid, there are many specific facets of the construct of interest that are not covered. Thus, the results may not extend to different facets of the construct as measured in different ways. Here too it is useful to be specific and to note precisely why other constructs and their measures might show different results. In short, be specific as to why a limitation or point might really make a difference. This latter use of acknowledgment augments the contribution of the study and suggests concrete lines of research.

3. Questions to Guide Manuscript Preparation

The section-by-section discussion of the content of an article is designed to convey the flow or logic of the study and the interplay of description, explanation, and contextualization. The study ought to have a thematic line throughout and all sections ought to reflect that in a logical way. The thematic line consists of the substantive issues guiding the hypotheses and decisions of the investigator (e.g., about procedures and analyses) that are used to elaborate these hypotheses. I mentioned that one way to check this is to read sections together like the Introduction and Results (by skipping the Method section). These sections ought to follow a similar flow. Analyses should be connected logically, with sentences about what is being tested and what was found in relation to the ideas or hypotheses presented in the Introduction. Skipping the Method section for this reading helps one to consider the flow. Similarly, one could push this further and read the Introduction and then Discussion – are they connected? The opening of the Discussion can address issues that were raised at the end of the Introduction, i.e., the purpose of this study. This is not a repeat of the purpose but a summary of the main results that addressed those purposes and goals. All these little tools are designed to help us as authors convey a thematic and logical flow that the reader can easily see.

A more concrete and hence perhaps more helpful way of aiding preparation of the manuscript is to consider our task as authors as that of answering many questions. There are questions for the authors to ask themselves or, on the other hand, questions reviewers and consumers of the research are likely to ask as they read the manuscript. These questions ought to be addressed suitably within the manuscript. Table 13.1 presents questions according to the different sections of a manuscript. The questions emphasize the descriptive information, as well as the rationale for procedures, decisions, and practices in the design and execution. The set of questions is useful as a way of checking to see that many important facets of the study have not been overlooked. As a cautionary note, the questions alert one to the parts rather than the whole; the manuscript in its entirety or as a whole is evaluated to see how the substantive question and methodology interrelate and how decisions regarding subject selection, control conditions, measures, and data analyses relate in a coherent fashion to the guiding question.

Table 13.1 Major questions to guide journal article preparation

Abstract

  • What are the main purposes of the study?

  • Who was studied (sample, sample size, special characteristics)?

  • How were participants selected and assigned to conditions?

  • To what conditions, if any, were participants exposed?

  • What type of design was used?

  • What are the main findings and conclusions?

  • What are one or two specific implications or future directions of the study?

Introduction

  • What is the background and context for the study?

  • What in current theory or research makes this study useful, important, or of interest?

  • What is different or special about the study in focus, methods, or design to address a need in the area?

  • Is the rationale clear regarding the constructs (independent and dependent variables) to be assessed?

  • What specifically are the purposes, predictions, or hypotheses?

  • Are there ancillary or exploratory goals that can be distinguished as well?

Method

Participants

  • Who are the participants and how many of them are there in this study?

  • Why was this sample selected in light of the research goals?

  • How was this sample obtained, recruited, and selected?

  • What are the subject and demographic characteristics of the sample (e.g., sex, age, ethnicity, race, socioeconomic status)?

  • What, if any, inclusion and exclusion criteria were invoked, i.e., selection rules to obtain participants?

  • How many of those subjects eligible or recruited actually were selected and participated in the study?

  • In light of statistical power considerations, how was the sample size determined?

  • Was informed consent solicited? How and from whom (e.g., child and parent), if special populations were used?

  • If non-human animals are the participants, what protections were in place to ensure their humane care and adherence to ethical guidelines for their protection?

  • Are there any professional, personal, or business interests or connections, financial or otherwise (e.g., service on boards), that might be or be perceived as a conflict of interest in relation to the focus of the study or direction of the findings?

Design

  • What is the design (e.g., group, true-experiment) and how does the design relate to the goals?

  • How were participants assigned to groups or conditions?

  • How many groups were included in the design?

  • How are the groups similar and different?

  • If a group is a “control” group, what is it intended to control?

  • Why are these groups critical to address the questions of interest?

Procedures

  • Where was the study conducted (setting)?

  • What measures, materials, equipment, or apparatus were used?

  • What is the chronological sequence of events to which participants were exposed?

  • What intervals elapsed between different aspects of the study (e.g., assessment, exposure to the manipulation, follow-up)?

  • If assessments involved novel measures created for this study, what data can be brought to bear regarding pertinent types of reliability and validity?

  • What checks were made to ensure that the conditions were carried out as intended?

  • What other information does one need to know to understand how participants were treated and what conditions were provided to facilitate replication of this study?

Results

  • What are the primary measures and data upon which the hypotheses or predictions depend?

  • What analyses are to be used and how specifically do these address the original hypotheses and purposes?

  • Are the assumptions of the statistical analyses met?

  • If multiple tests are used, what means are provided to control error rates (increased likelihood of finding significant differences in light of using many tests)?

  • If more than one group is delineated (e.g., through experimental manipulation or subject selection), are they similar on variables that might otherwise explain the results (e.g., diagnosis, age)?

  • Are data missing due to incomplete measures (not filled out completely by the participants) or due to loss of subjects? If so, how are these handled in the data analyses?

  • Are there ancillary analyses that might further inform the primary analyses or exploratory analyses that might stimulate further work?

Discussion

  • What are the major findings of the study?

  • Specifically, how do these findings add to research and support, refute, or inform current theory?

  • What alternative interpretations, theoretical or methodological, can be placed on the data?

  • What limitations or qualifiers are necessary, given methodology and design issues?

  • What research follows from the study to move the field forward?

  • Specifically, what ought to be done next (e.g., next study, career change of the author)?

More generally

  • What were the sources of support (e.g., grants, contracts) for this specific study?

  • If there is any real or potentially perceived conflict of interest, what might that be?

  • Are you or any coauthors or a funding agency likely to profit from the findings or materials (e.g., drugs, equipment) that are central to the study?

Note: These questions capture many of the domains that ought to be included, but they do not exhaust information that a given topic, type of research, or journal might require. Even so, the questions convey the scope of the challenge in preparing a manuscript for publication.

4. Guidelines for Research
4.1 Impetus for Reporting Guidelines

There has been a long series of guidelines on how to conduct and report research, and these are directly related to preparation of a study for publication. The history includes special emphasis on ethical treatment of participants. Regulations followed in response to the atrocities of the Nazi regime during World War II, beginning with the Nuremberg Code (1947). Since then, many other codes have developed (e.g., Declaration of Helsinki of the World Medical Association, Belmont Report), beyond the scope of the present discussion (see Kazdin, 2017). Protection of participant rights remains as important as ever and is of even greater concern considering new opportunities to obtain and combine data sources (“big data”), often with information that is public in some way (e.g., medical records, social media, tracking locations and purchases). Individuals may not be aware of the collection and use of the information. Even when participants are anonymous, groups (e.g., defined by ethnicity, culture, or setting) can often be readily singled out and identified in ways that can reflect quite negatively on them (e.g., Metcalf & Crawford, 2016; Zimmer, 2010).

The need for guidelines for reporting research has emerged from multiple additional concerns. First, collaborative research currently is more the rule than the exception in science. Collaborations often involve scores of authors, from multiple disciplines, and from many different countries. There is interest across nations in reaching common standards in relation to the openness of research, access to information, the merit-review process, and ethical issues (e.g., Suresh, 2011). That interest has provided a critical context for guidelines for the conduct and reporting of research that span multiple disciplines, countries, and journals.

Second, lapses in what is reported in research have been well documented. For example, studies often omit information such as exactly who the participants were (e.g., subject and demographic variables) and how they were recruited, who administered treatment or experimental procedures and the extent of their training, whether the integrity or execution of treatment was assessed, fundamental characteristics of the data evaluation, and more, as reflected in the citations noted previously.

Third, selective reporting of results and data analyses has been raised as a critical issue that introduces biases in individual studies and entire literatures. Selectively reporting the results of some data analyses or some of the dependent measures can increase the proportion of chance findings in the literature (Simmons et al., 2011). For example, in identifying evidence-based treatments, there is often a clear bias in how authors report the data by not presenting the full range of measures, some of which would not support conclusions about the impact of treatment (De Los Reyes & Kazdin, 2008). Guidelines are intended to foster consistency and clarity in how the study will be reported, to minimize the biases in reporting that emerge.

Fourth, more flagrant than “mere” omission of information and selective reporting has been fraud and fabrication of data in science. Fraud is not new in science. However, both the visibility of fraud to the public, including the circulation through social media, and direct and disastrous implications from fraudulent studies have been more evident than ever before (see Levelt, Noort, & Drenth Committees, 2012; Watanabe & Aoki, 2014).Footnote 2

Finally, there has been renewed concern about the replicability of research. Replication as a general tenet, if not practice, has always been the backbone of science. Given many of the points I have already mentioned (publication biases, statistical analyses, and selective reporting), various authors have reached the dramatic conclusion, occasionally supplemented with mathematical proofs and simulations, that many and even most published research findings are not correct, i.e., are false (see Francis, 2012; Ioannidis, 2005; Moonesinghe et al., 2007). Several calls for increased replication have been made. Psychology has taken the lead in calling for and supporting replications and underscoring the importance of transparency of procedures (Center for Open Science, https://cos.io/). Central to replication, of course, is making the procedures explicit and the materials and results available. Guidelines for conducting research to increase the likelihood that a study can be replicated are obviously important. Many journals, national and international, require providing information about a study and the data so they are freely available to others to facilitate re-evaluation of the data and replication of the entire study.

Overall, science has come under increased scrutiny from within the sciences, from government, and from the public at large. Even though the assorted problems I highlighted are seemingly infrequent, the circulation of information (e.g., Web, news, and social media) is more extensive than ever before, and retractions (when authors and journals make some effort to “take back” and renounce what was published) are more visible and available as well. And news media more routinely comment on scientific findings and reflect skepticism about replication and replicability of effects (e.g., Lehrer, 2010). The points I have raised have served in part as the impetus for improving research, especially focusing on transparency and accountability of investigators. Guidelines have been helpful in fostering greater consistency in the reporting of research and, in the process, sensitizing researchers to what to attend to in advance of a study.

4.2 Sample Guidelines Briefly Noted

Several organizations and groups have developed standards for reporting research and in the process convey the need to address many of the issues I have highlighted previously. The scope of guidelines that are available is enormous. An international umbrella organization that collects, oversees, and promotes the use of research guidelines is the Equator Network. The network maintains a comprehensive database of guidelines, numbering over 400 as of this writing (www.equator-network.org/reporting-guidelines/). The database is nicely organized by type of paper (guidelines for empirical studies, literature reviews, meta-analyses), by different methodologies (e.g., randomized trials, observational studies), and so on. With so many guidelines, with enormous overlap in what they cover, one can see this is not a minor movement to improve research.

Examples of such standards include the CONSORT statement for randomized controlled trials and the trial registration model of ClinicalTrials.Gov.

Most of the guidelines include some combination of checklists, flow charts, and narrative explanations of what specific items are to be included in a report and what the information is designed to accomplish. I mention two briefly.

First, the CONSORT standards, mentioned above, are arguably the most familiar set of guidelines. They have been adopted by hundreds of professional journals encompassing many disciplines and countries (see www.consort-statement.org/). The CONSORT guidelines were devised primarily for clinical trials in medical research but have extended well beyond that and are routinely used in clinical trials of psychosocial interventions. As noted in the most recent version, clinical trials have a history of omitting significant information, such as a description of who was included in the study, the sample size calculation (e.g., why a specific size was chosen in relation to statistical power), descriptions of procedures, and reports of procedures (e.g., randomization) that were not really invoked, among other lapses (Moher et al., 2001). Beginning in the early 1990s, efforts began to make recommendations for the reporting of studies, and from those efforts the CONSORT guidelines emerged.

The guidelines consist of a checklist of essential items that ought to be included in any randomized controlled trial of treatment. The checklist displays what is needed, but along with the checklist is a detailed explanation of the items and their rationale for inclusion (Moher et al., 2001). In addition, the website provides educational material and a database of materials related to reporting of randomized controlled trials (e.g., examples from real trials). In preparation of journal articles, the CONSORT criteria include a list of what to cover and how. This is more concrete than my general statements of ensuring there is a logical flow and underlying theme to the journal article. Yet, the details are basic and critical and hence these guidelines are valuable and widely adopted by many journals.

Second, ClinicalTrials.Gov (https://clinicaltrials.gov/) provides another model to guide research. This consists of preregistration of a study, which requires authors to convey their plan for conducting the research and analyzing the data. Preregistration allows the range of participants in research (investigators, peer reviewers, journal editors, funding agencies, policy makers, the public at large) to determine whether the research, when completed, has deviated from the preregistered plan. Preregistration of research is now common across many funding agencies and journals (Nosek et al., 2018). ClinicalTrials.Gov is a large database that includes privately and publicly funded studies conducted throughout the world. Indeed, this is the largest clinical trials database; as of this writing (January 2020), over 327,000 studies are registered, including studies from all 50 states in the United States and 209 countries. When clinical trials compare interventions or compare an intervention against a control group, funding agencies (e.g., National Institutes of Health), organizations (e.g., World Health Organization), and a consortium of journal editors (the International Committee of Medical Journal Editors) require investigators to register their clinical trials in advance of the study.

In registration of one’s study, information covers diverse facets of the project. Indeed, there is a multi-page template that includes identification of the investigators of the study, the design, what the interventions are, what will be the outcome criteria (e.g., primary and secondary), the number of anticipated subjects, criteria for inclusion of the subjects, status of procedures to protect clients, and much more. Merely mentioning some of the domains that are included in the register does not underscore their significance in the conduct and reporting of the study. Consider three examples to convey the point.

First, the guidelines require specification of the outcome criteria and which outcomes or measures will be primary and secondary. This is a pre-commitment of the investigator to be clear about the outcomes. This does not mean that investigators cannot look at all outcome measures or derive new ones based on interesting findings, pre-specified or not, as the study is completed. However, pre-specification can reduce the tendency in written reports to underscore, emphasize, and consider as primary those measures that “come out,” i.e., support the hypotheses.

Second, in many studies there are multiple investigators whose roles vary in the design, execution, analysis, and other facets of the study, and these investigators are likely to be listed as authors. Specification of the investigators and their roles clarifies accountability for the final manuscript. Also, this requires that people in fact have a role in the study before being included. All the expected dynamics of human interactions (e.g., who does and does not get to be an author, where they are placed in the list of authors) and human emotions (e.g., indignation, disappointment, rage, helplessness) surround authorship. War stories here could fill volumes. The guidelines can help a little. At the end of a study, there is accountability of who is in charge, who had what role, and who was involved. If there is fraud or faked data or questionable practices or a manuscript retraction (once there is a question about practices or a finding), the team involved in the study and their roles can be delineated. This can enhance the integrity of the research process by making clear that one is accountable for the study and its conduct.

Third, the guidelines specify whether, how, and where the data will be stored and whether other materials critical to the study will be available. Occasionally journals or funding agencies require that the data are deposited and made available. This practice, fostered by the guidelines, increases the transparency of the research but also helps replication efforts.

A few comments in passing. To begin, preregistration does not fix the research plan so that no further changes can be made. “Pre”-registrations can be updated after participant enrollment or even after data collection has begun in order to document any changes that occur in the course of a study (DeHaven, 2017; Nosek et al., 2018). All that is required is to make sure the changes are clear, transparent, and explained. The registration still thwarts post-hoc decision making based on how the data come out, or switching some measures and ignoring others, some of the sins of research. An additional point: the many guidelines are designed to improve the reporting of research. However, so many facets need to be considered ahead of time in these guidelines that they necessarily influence and guide the design of a study. This chapter is about preparing a manuscript for journal publication. Consulting and following widely adopted guidelines underscores the point that key issues about the publication of a journal article emerge before the first subject is run in the study.
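
As a rough sketch of the kind of information a registration commits an investigator to, consider the following (in Python, with field names that are illustrative and simplified, not the actual ClinicalTrials.Gov template):

    # Illustrative only: a simplified stand-in for a trial registration record.
    # Field names and values are hypothetical, not the ClinicalTrials.Gov schema.
    registration = {
        "investigators": ["A. Principal (design, analysis)", "B. Collaborator (data collection)"],
        "design": "two-arm randomized controlled trial",
        "interventions": ["cognitive behavioral therapy", "waitlist control"],
        "primary_outcome": "self-reported depressive symptoms at post-treatment",
        "secondary_outcomes": ["anxiety symptoms", "functional impairment"],
        "anticipated_sample_size": 120,
        "inclusion_criteria": ["age 18-65", "meets diagnostic criteria at intake"],
        "participant_protections": "institutional review board approval; informed consent",
    }

    # Committing these decisions in advance makes any later deviation visible and explainable.
    for field, value in registration.items():
        print(f"{field}: {value}")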

5. Selecting a Journal

Preparation of the manuscript logically occurs before selecting a journal and submitting to that journal for publication. However, investigators occasionally have a journal or a couple of journals in mind before the manuscript is prepared. Journals have different emphases and publish research with specific foci (e.g., theory, application), samples (e.g., non-human animals, college students, community samples), settings (laboratory, field), and research designs (cross-sectional, longitudinal, experimental, observational). Consequently, it is not odd for the investigator to plan/hope that a study, when completed, will be appropriate for a journal he or she targeted well before preparing the manuscript for publication. In my own case, I prefer to see the final or almost final write-up to consider what journals might be reasonable outlets for the article. I mention selecting a journal here on the assumption that this logically follows in the sequence of completing a study, preparing the write-up, and submitting the article for publication. Selecting a journal is part of submitting the article.

Thousands of journals are available in the behavioral and social sciences, and the resources and their potential relevance to your study are easily obtained from the Web (Gunther, 2011; Thomson Reuters, 2011; Thursby, 2011). These sources can be searched by topic and keywords in relation to how you view your study (e.g., clinical psychology, candidate for Nobel prize). It is beneficial to skip the search among the thousands of journals and begin the search more narrowly. There are many professional organizations within psychology that have their own publications. The two major professional organizations whose journal programs are widely recognized and emulated are the American Psychological Association (APA, 2020a) and the Association for Psychological Science (2020).

Each source I have noted here provides information about the editorial policy, content area or domain, type of article (e.g., investigations, literature reviews, case studies), guidelines for manuscript preparation, and access to tables of contents of current and past issues. I have emphasized journals in the English language. Psychology is an active discipline internationally and psychological associations in many countries and regions (e.g., European Union, Scandinavia) have many excellent journals as well.

Many criteria are invoked to select a journal to which one will submit a manuscript, including the relevance of the journal in relation to the topic, the prestige value of the journal in an implicit hierarchy of journals in the field, the likelihood of acceptance, the breadth and number of readers or subscribers, and the discipline and audience one wishes to reach (e.g., psychology, psychiatry, medicine, social work, health, education). As for the prestige value, clearly some journals are regarded as more selective than others. For example, some of the APA journals are premier journal outlets in their respective areas (e.g., Journal of Consulting and Clinical Psychology, Journal of Personality and Social Psychology). Yet, journals from other organizations, journals not sponsored by an organization, and journals from other professions or disciplines can be as or more highly regarded. Indeed, in some areas (e.g., behavioral neuroscience), some of the most discriminating and selective publication outlets are not psychology journals (Science, Nature Neuroscience). One can identify the best outlets by familiarity with the literature (e.g., where do the best studies seem to be published) and by chatting with colleagues.

Word of mouth and the reputation of a journal often are well recognized, and a journal’s status within professional organizations is known. There has been an enduring interest in having more objective measures, and they are available. The impact of a journal is primary among these measures (Web of Science, 2020) and reflects the extent to which articles in a journal are cited by others. Journals whose articles are heavily cited are those with much higher impact. Information is available for journals in virtually all areas of science. Within the social sciences alone, over 3400 journals are covered. There are reasons not to be wedded to journal impact.Footnote 3 The impact of one’s own work is very important, and it appears that this is not strongly related to the journal impact measure.
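
For reference, the most widely used version of this measure, the two-year journal impact factor, is computed roughly as follows for a given year \(Y\):

\[
\mathrm{Impact\ Factor}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

A journal whose recent articles are cited, on average, three times apiece in the count year would thus have an impact factor of about 3. The figure characterizes the journal as a whole; it says little about how often any particular article will be cited.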

Some journals are not very selective and, indeed, must hustle (e.g., invite, accept many) articles so they can fill their pages. Indeed, it is not difficult at all to get one’s work published in the genre referred to as predatory journals (e.g., Brainard, 2020). These are journals that send countless emails to professionals seeking their manuscripts and provide little and sometimes no evaluation of merit. The journals are primarily business ventures and charge high author fees. The journal landscape is intricate because some journals with a peer-review process offer the option of open access (the article is available to anyone online) if the author pays a publication fee.

Be a little wary of journals in psychology that charge authors for publishing their papers. For these journals, when one’s paper is accepted, the author is charged based on how many journal pages the article will require. These outlets do not necessarily take all submissions, but they often take most. These journals tend not to be as carefully peer-reviewed and hence publications in such journals are commensurately much less well regarded. Within psychology, career advice is to focus on peer-reviewed and well-regarded journals, leaving aside other issues (e.g., who publishes the journal, whether there are charges). Knowledge of the area of research, journal citation impact, and contact with one’s colleagues can readily identify the ideal outlets for one’s research. Early in my career, I asked a senior colleague about a journal and he gave me a sharp NEVER publish there. Decades later I can see that was sound advice. If in doubt, seek advice. If you have no doubts but you are early in your career, perhaps also seek advice.

The audience one wishes to reach may be a critical and indeed primary consideration in selecting a journal. Who might be interested in this study (beyond blood relatives)? One way to answer this is to consider the Reference section of one’s article. Are one or two journals emphasized in the Reference section of the manuscript? If so, one of these journals might be the most appropriate outlet. Citation of the journal on multiple occasions indicates that the journal publishes work on the topic and readers who are likely to be interested in the topic are also likely to see the article. Also relevant, journals vary markedly in their readership and subscription base. Some journals have relatively few subscribers (e.g., 200–600 up to several thousand) or are omitted from easily accessed databases. The visibility of one’s study and the chance that others will see it are influenced by these considerations. Fortunately, most professional journals have their abstracts included in databases that can be accessed from the Web. This makes even the most obscure study accessible.

Most journals are in print (hard copy) and electronic form, but many are only Web-based and are sometimes referred to as electronic journals or e-journals. This is not the place to discuss that topic except to note that publication on the Web is often much faster (less delay in review and acceptance of the manuscript) than publication in a printed journal. There are still dynamic changes in how journals will be published and disseminated, and print versions may be on borrowed time. The central issue for one’s career is the extent to which the publication outlet is well regarded by one’s peers and the care with which manuscripts are reviewed before they are accepted and published. Electronic versus printed journal format is not as critical as the quality of the publication. If publication in the journal requires little or no peer review, if most manuscripts are accepted, and if manuscripts are accepted largely as they are (without revision), the quality of the research and the value of the publication to one’s career may be commensurately reduced.

6. Manuscript Submission and Review
6.1 Overview of the Journal Review Process

Alas, through careful deliberation and 30 minutes with your coauthor at a Ouija board, you select a journal and are ready to submit your manuscript for publication. Before you do, consult the Instructions to Authors provided by the journal to make sure you submit the manuscript correctly. Usually manuscripts are submitted through a journal portal, i.e., electronically, in which the manuscript file and a letter of submission are uploaded to the journal website. In some cases, you may be required to include sentences or paragraphs in the letter you submit stating that the study is not under consideration at another journal, has not been published before, and has met ethical guidelines specified by university or institutional policy and various laws, and that you will transfer the copyright to the publisher if the manuscript is accepted. Processing of the manuscript could be delayed if your letter does not meet the guidelines provided by the journal.

Once the manuscript is submitted, the journal editor usually sends the electronic file to two or more reviewers who are selected because of their knowledge and special expertise in the area of the study or because of familiarity with selected features of the study (e.g., novel methods of data analyses). Reviewers may be selected from the names of authors whose articles you included in your Introduction. Some reviewers are consulting editors who review often for the journal and presumably have a perspective of the type and quality of papers the journal typically publishes; other reviewers are ad-hoc reviewers and are selected less regularly than consulting editors. Reviewers are asked to evaluate the manuscript critically and to examine whether or the extent to which:

  • The question(s) is important for the field;

  • The design and methodology are appropriate to the question;

  • The results are suitably analyzed;

  • The interpretations follow from the design and findings; and

  • The knowledge yield contributes in an incremental way to what is known already.

(You may note that these bulleted points encompass the explanation and contextualization features I noted in relation to manuscript preparation. Each point is one that can be readily addressed by the author in preparing the manuscript.) Typically, reviewers are asked to give a summary recommendation (e.g., reject or accept the manuscript). All recommendations to an editor are advisory and not binding in any way. At the same time, the editor sought experts and usually follows their recommendations. Yet reviewers too must make the case for their comments.

Once the paper is reviewed, the editor evaluates the manuscript and the comments of the reviewers. In some cases, the editor may provide his or her own independent review of the paper; in other cases he or she may not review the paper at all but defer to the comments and recommendations of the reviewers. The editor writes the author and notes the editorial decision. Usually, one of three decisions is reached: the manuscript is accepted pending a number of revisions that address points of concern in the reviewers’ comments; the manuscript is rejected and will not be considered further by the journal; or the manuscript is rejected but the author is invited to resubmit an extensively revised version of the paper for reconsideration.

The accept decision usually means that the overall study was judged to provide important information and was well done. However, reviewers and the editor may have identified several points for further clarification and analysis. The author is asked to revise the paper to address these points. The revised paper would be accepted for publication.

The reject decision means that the reviewers and/or editor considered the paper to include flaws in conception, design, or execution or that the research problem, focus, and question did not address a very important issue. For journals with high rejection rates, papers are usually not rejected because they are flagrantly flawed in design. Rather, the importance of the study, the suitability of the methods for the questions, and specific methodological and design decisions conspire to serve as the basis for the decision.

The reject–resubmit decision may be used if several issues emerged that raise questions about the research and the design. In a sense, the study may be viewed as basically sound and important but many significant questions preclude definitive evaluation. The author may be invited to prepare an extensively revised version that includes further procedural details, additional data analyses, and clarification of many decision points pivotal to the findings and conclusions. The revised manuscript may be re-entered into the review process and evaluated again as if it were new. On other occasions, the manuscript may be resent to reviewers familiar with the prior version. Less often the editor may make an executive decision and accept or reject the manuscript without outside input.

Of the three letters, clearly a rejection letter is the most commonly received. Authors, and perhaps new authors in particular, are insufficiently prepared for this feature of the journal publication business.Footnote 4 Journals often publish their rejection rates, i.e., the proportion of submitted manuscripts that are rejected, and this figure can be quite high (e.g., 70–90 percent). Often the prestige value of the journal is in part based on the high rejection rate. Yet, the rate is ambiguous at best because of self-screening among potential authors. For example, for very prestigious publication outlets (e.g., Psychological Review, Science) where psychological papers are published, the rejection rates cannot take into account the fact that most authors are not likely to even try that outlet, even if they have a contribution that falls within the topic and format domain. Rejection rates across journals are not directly comparable. Even so, the rates give the would-be author the approximate odds if one enters the fray.

Although beyond our purpose, the review process deserves passing comment. The entire process of manuscript submission, review, and publication has been heavily lamented, debated, and criticized. The peer-review process has a long history as an effort at quality control over the content and standards of what is published (Spier, 2002). The topic is central to science broadly and continues to be assessed, commented on, and evaluated, with efforts to alter or improve the process (e.g., Elson et al., 2020; Kirman et al., 2019). The alternatives to peer review (e.g., no review, judgment by one person such as the editor) have their own liabilities. Many journals invoke procedures in which the identity of the authors and the reviewers is masked, i.e., names are not included on the manuscript sent to reviewers or the reviews sent to authors. The goal is to limit some of the human factors that can color responses to a person, name, or other facet and to allow reviewers to be candid in their evaluations without worrying about facing the colleague who will never speak to them again. The peer-review system is far from perfect. The imperfections and biases of peer review, the lack of agreement between reviewers of a given paper, the influence of extraneous variables (e.g., prestige value of the author’s institution, number of citations of one’s prior work within the manuscript) on decisions of reviewers, and the control that reviewers and editors exert over authors have been endlessly and vigorously discussed (e.g., Bailar & Patterson, 1985; Benos et al., 2007; Cicchetti, 1991; Smith, 2006; Stahel & Moore, 2014).

Understanding the review process can be aided by underscoring the one salient characteristic that authors, reviewers, and editors share, to wit, they are all human. This means that they (we) vary widely in skills, expertise, perspectives, sensitivities, motives, and abilities to communicate. Science is an enterprise of people and hence cannot be divorced from subjectivity and judgment. In noting subjectivity in the manuscript review and evaluation process, there is a false implication of arbitrariness and fiat. Quality research often rises to the top and opinions of quality over time are not idiosyncratic. Think of the peer-review process as the home-plate umpire in a baseball game. Any given call (e.g., strike) may be incorrect, arguable, and misguided. And any given pitcher or batter suffers unfairly as a result of that call. As reviewers (the umpires) make the call on your manuscript (rejection, you strike out), you too may have that occasional bad call. But over time, it is unlikely that all manuscripts an author submits receive a misguided call. Pitchers and batters earn their reputations by seeing how they perform over time, across many umpires, and many games. One looks for patterns to emerge, and this can be seen in the publication record of an active researcher.

6.2 You Receive the Reviews

Alas, the editorial process is completed (typically within three months after manuscript submission) and the reviews are in. You receive an email (or in olden days a printed letter) from the editor noting whether the paper is accepted for publication and if not whether it might be if suitably revised. It is possible that the letter will say the manuscript is accepted as is (no further changes) and praise you for your brilliance. The letter may comment further that the reviewers were awed by how the study was executed and how well the manuscript was written. If this occurs, it is the middle of the night and you are dreaming. Remain in this wonderfully pleasant state as long as you can. When you awake, your spouse, partner, or significant (p < .05) other reads the email and you read one of the three decisions noted previously.

If the manuscript is accepted, usually some changes are needed. These do not raise problems. More often than not, the manuscript is rejected. There are individual differences in how one reacts to this decision. Typically, one feels at least one of these: miffed, misunderstood, frustrated, or angry at the reviewers. Usually one has only the email comments and has limited avenues (e.g., scrutiny of the phrasing and language) for trying to identify who could have possibly rejected the manuscript. If a hard (printed) version of the reviews was sent to you, one can scrutinize the font style, key words, possible DNA remnants of the reviewers’ comments sheets, and molecules on the pages that might reveal pollutants associated with a specific city in the country. (I myself would never stoop to such behaviors but I have a “friend” who, over the years, was able to identify two not-so-friendly reviewers who unwittingly left clues that I – I mean my friend – was able to decipher.) To handle a rejection verdict, some authors select one of the very effective psychotherapies or medications for depression; others use coping strategies (e.g., anger management training, stress inoculation therapy) or complementary and integrative medicines (e.g., acupuncture, mineral baths, vegan enemas). (I myself use all these routinely with their order balanced in a Hyper-Graeco-Latin Square Design.)

The task is to publish one’s work. Consequently, it is useful and important to take from the reviews all one can to revise the manuscript. Maladaptive cognitions can harm the process. For example, when reading a review, the author might say, “The reviewer misunderstood what I did” or “did not read this or that critical part.” These claims may be true, but the onus is always on the author to make the study, its rationale, and its procedures patently clear. A misunderstanding by a reviewer is likely to serve as a preview of the reactions of many other readers of the article. Indeed, most readers may not read with the care and scrutiny of the reviewers. If the author feels a rejected manuscript can be revised to address the key concerns, by all means write to the editor and explain this in detail and without righteous indignation and affect.

Authors often are frustrated at the reactions of reviewers. In reading those reactions, authors usually recognize and acknowledge the value of providing more details (e.g., further information about the participants or procedures). This is the descriptive facet of manuscript preparation I discussed previously. However, when the requests pertain to explanation and contextualization, authors are more likely to be baffled or defensive. This reaction may be reasonable because much less attention is given to these facets in graduate training, and explanation and contextualization are much less straightforward. Also, reviewers’ comments and editorial decision letters may not be explicit about the need for explanation and contextualization. For example, some of the more general reactions of reviewers are often reflected in comments such as: “Nothing in the manuscript is new,” “I fail to see the importance of the study,” or “This study has already been done in a much better way by others.”Footnote 5 In fact, the characterizations may be true. Authors (e.g., me) often feel like they are victims of reviewers who wore sleep masks when they read the manuscript, did not grasp key points, and have had little exposure to, let alone mastery of, the pertinent literature. Occasionally two or more of these are true.

As often as not, it is the reviewers who might more appropriately give the victim speech. The comments I noted are clear signs that the author has not made the connections between the extant literature and this study and has not integrated the substantive, methodological, and data-analytic features in a cohesive and thematic way. Reviewers’ comments and less-than-extravagant praise often reflect the extent to which the author has failed to contextualize the study so as to mitigate these reactions. The lesson for preparing and evaluating research reports is clear. Describing a study does not establish its contribution to the field, no matter how strongly the author feels that the study is a first.

Let us assume that the manuscript was rejected with an invitation to resubmit. As a rule, I try to incorporate as many of the reviewers’ and editor’s recommendations as possible. My view is that the reviewer may be idiosyncratic, but more likely represents a constituency that might read the article. If I can address several or all issues, clarify procedures that I thought were already perfectly clear, and elaborate a rationale or two, it is advisable to do so. Free advice from reviewers can and ought to be used to one’s advantage.

There are likely to be aspects of the reviews one cannot address. Perhaps reviewers provide conflicting recommendations, or a manuscript page limit precludes addressing or elaborating a specific point. Even more importantly, perhaps as an author one strongly disagrees with the point. Mention these in the letter to the editor that accompanies the revised manuscript. Explain what revisions were or were not made and why. If there are large revisions that alter the text (more than a few sentences), methods, or data analyses, help the editor by noting where the change can be found in the manuscript, and even submit an extra copy of the manuscript in which the changes are tracked in an editing/word-processing system.

The investigator may receive a rejection letter and decide simply to submit the manuscript as is to another journal. I believe this is generally unwise. If there are fairly detailed reviews, it is to the author’s advantage to incorporate the key and often not-so-key points, even if the manuscript is to go to another journal. I have often seen the same manuscript (not mine) rejected by two different journals when no changes were made after the rejection from the first journal. The authors could have greatly improved the likelihood of publication in the second journal but were a bit stubborn about making any revisions. Even if the manuscript were to be accepted as is by the second journal, the author likely missed an opportunity to make improvements after the first set of reviews was provided. In general, try to take all the recommendations and criticisms from the reviews and convert them into facets that can improve the manuscript. Obstacles to this process may stem from our natural defensive reactions as authors, a negativity bias, and the occasional brutish way in which reviewers convey cogent points. (I remember being highly offended the first two or three times reviewers offered such comments as, “the author [me] would not recognize a hypothesis if it fell on his lap” and “the design of this study raises very important issues, such as whether it is too late for the author [me] to consider a career change.” I have come to refer to all this as the pier-review process to underscore how often reviewers have made me want to jump off one.)

There is an additional reason to take advantage of the review process and try to improve a manuscript we might think is perfect. For those researchers who remain in academia, one’s published studies occasionally are read as part of a promotion process. As authors we might feel relieved that a study or two was published and view that automatically as a sign that things are great. In some ways it is, but when the study is read later we still want to be sure the case was made in a compelling fashion, and reviewer suggestions might help. My view is to incorporate as many recommendations, changes, and comments as possible. I begin with the view that reviewers are experts and that their recommendations, concerns, and misunderstandings are facets of the manuscript it behooves me to address.

It is worthwhile and highly rewarding to publish one’s research. The process takes time and persistence. Also, contact with others through the review process can greatly improve one’s work. In my own case, reading the reviews occasionally has stimulated the next studies that I carried out. In one case, I befriended a person who had been a reviewer of my work earlier in my career. Over time and from following his work, it became very clear that he was behind an influential review, although his identity had been masked. Years later over dinner, I mentioned his review in the distant past, the study it generated, and the very interesting results and, of course, expressed my gratitude. His suggestion actually led to a few studies. (His review of my manuscript was not entirely positive, which probably is the main reason I hid in the bathroom of the restaurant until he paid the check for dinner.) The lesson is more than getting one’s manuscript published. Reviews can be very educational, and it is useful to let the comments sit for a while until the rage over rejection subsides.

The journal review process is not the only way to obtain input on one’s manuscript. Once in a while, I send a penultimate draft of a manuscript to experts in the field whom I do not know. I convey in a letter what I am trying to accomplish and ask if they would provide feedback. I have done this on several occasions and cannot recall any colleague who has refused to provide comments. The comments are usually detailed and quite constructive and have a different tone from those that emanate from the journal review process. The comments in turn can be used to devise the version that is submitted for publication.

5. Closing Comments

Designing and completing a study requires many skills. Publication and communication of the results of research represent a separate set of skills, and most of these skills are not mentioned or detailed in graduate training. I have mentioned three tasks that are involved in preparing a manuscript for journal publication: description, explanation, and contextualization of the study. The writing we are routinely taught in science focuses on description, but the other components are central as well and determine whether a study not only appears to be important but in fact is. Recommendations were made about what to address and how to incorporate description, explanation, and contextualization within the different sections of a manuscript (e.g., Introduction, Method).

It is often useful to identify a model study from one’s own reading that nicely integrates description, explanation, and contextualization. Read this paper for content and then evaluate sections and paragraphs from a higher level of abstraction. What does this paragraph accomplish in leading to the next section, what did the author do to make the case for the study, how did she keep the story line of the Introduction, Results, and Discussion very clear, and so on? These meta-level questions can help identify a template to better operationalize the points I have emphasized.

Another way to approach the task of preparing the manuscript is to consider the set of questions that ought to be addressed. Questions were provided to direct the researcher to the types of issues reviewers are likely to raise about a manuscript. I mentioned the many guidelines that now govern research. These guidelines sometimes must be followed as a matter of policy for various journals. The guidelines are useful for identifying key facets of a study and a report that need to be addressed, including clarity of all facets of the study, transparency of procedures, ethical issues and attention to participants, and others. All these facets are obviously important, but they are more focused on description than on explanation and contextualization. As you prepare the manuscript, give great attention to these latter components because they are likely to be the Achilles heel as the manuscript is evaluated for publication.

Publication of one’s research has many rewards. Certainly salient among them is generating new knowledge. There is a canvas of ignorance that is still mostly blank, and one’s research can paint one stroke. That is hugely rewarding. Added external rewards are often available as well. Fame and fortune are not likely, but one’s publication record can contribute directly to obtaining a job and to promotion, as well as to the opportunity to work with students at all levels and postdoctoral researchers who join in and improve the work with their ideas. Research also sharpens one’s own thinking, which began with the conceptualization of the study and an effort to better understand the phenomenon. Writing up the results often helps to extend one’s own thinking further and hence is a critical step toward the next study or toward further conceptualization of the topic or area. This is a reciprocal process in which we too are influenced by the publications of others and hopefully exert influence with our own publications. In short, publication is not just about publication but is a gateway experience that fosters many additional fulfilling activities, including participation in the larger scientific agenda and community.

14 Recommendations for Teaching Psychology

William C. Rando & Leonid Rozenblit
1. Introduction: Becoming a Teacher

Teaching is one of the legs of the academic tripod, along with research and service. As a typical academic psychologist, you will find that teaching occupies a significant percentage of your time, even though you may never get any formal training in pedagogy. Today, with increasing numbers of universities offering training in teaching to graduate students, more people are starting their professional careers prepared to teach. Some psychology graduate students receive teaching preparation directly from their departmental mentors or take part in training organized by a campus center for teaching and learning. In either case, the process of becoming a teacher – of learning how to reach students in ways that allow them to grow and flourish – begins and ends with your commitment to develop the skills and habits of mind of an expert teacher.

Graduate students who attend rigorously to this kind of training and mentoring are fortunate; regardless of their innate success with students, they are better able to articulate the elements of their craft, they enter the job market ready to teach well, and they have a much easier time making the transition from student to academic professional. They are also more likely to be in control of the teaching process in a way that allows them to vary the style they use and the amount of time they spend on teaching, while maintaining a very high standard for themselves and their students.

Due to the vagaries of the job market, new assistant professors may find themselves at institutions with vastly different teaching cultures and expectations from those where they were trained. Faculty members who never mastered the art of their own teaching are more likely to struggle with the adaptation. What is worse is that this burden can last a long time, creating a career hampered by a hostile relationship with the “teaching load” – never giving their full potential to students, and never achieving the rewards of great teaching that so many senior faculty experience (McKeachie, 1999).

Teaching well is important throughout your career, but it is particularly important in the early years. The “publish or perish” scenario that once applied only to highly competitive research universities now applies to almost all academics, even those who choose careers at teaching-oriented, liberal arts colleges. Increasingly, new psychology PhDs are seeking short-term, adjunct faculty positions out of preference or necessity. These per-course positions are typically free of pressure to publish. However, in certain markets, good adjunct faculty positions are competitive and the standards for hiring are quite high. Some schools may use active publication as a standard for hiring adjuncts, while others are not simply looking for credentialed scholars – they are looking for scholars with proof of excellent teaching. In either case, the more desirable full-time and part-time teaching opportunities are going to people who have proof of success in the classroom as well as in the lab. In today’s market, schools can afford to pass over the scholar who neglects teaching or who hasn’t had time to publish because they are still trying to figure out how to teach.

For starters, you have to choose what kind of teacher you will be. As you take classes and serve as a TA in others, begin to develop your own teaching values and priorities. Look at what students are doing during class, how they approach the subject, and what they are learning. Ask yourself what you want for your students when you are a teacher. To be consistently effective in the classroom, it is helpful to know who you are as a teacher and to understand the assumptions and core values you bring to the process of teaching (Brookfield, 1995). This reflective process may seem esoteric, but be assured, it is not. One can easily design a class based on the activities of teaching and learning without giving a lot of thought to the “why.” But when unusual things start to happen in your classroom, and they always do, having a deep understanding of your basic assumptions about students, learning, authority, fairness, purpose, and so on will give you a solid foundation for action. As a graduate student, it is a good idea to reflect on the following:

  • Why do I teach psychology? What is the essential value to my students of studying psychology with me?

  • What is my role as a teacher, and what does that mean for the relationship I will have with my students?

  • What sort of behaviors do I expect of students? How do I expect to be treated? How do I expect students to treat each other?

  • What is the nature of my authority in the classroom?

The fact remains that the time you devote to teaching will have to come from some other important endeavor, like research. If the price for that trade-off seems terribly high, consider the following:

  • Evolving methods of teaching assessment are allowing schools to give greater weight to quality teaching at hiring, promotion, and tenure time.

  • Teaching, like any skill, becomes easier as you get better. Skilled teachers can achieve great results with less time and effort than unskilled teachers.

  • As a new professor, a reputation for good teaching allows you to attract graduate students and talented undergraduates to you and your work. This can create enormous professional benefits at a time when you really need them.

  • Teaching well is a joy. Teaching poorly is a burden, or worse, a drain on your time and energy.

  • The time you devote to teaching this year will pay off in time you can devote to other things next year and in the years to come.

Many of the resources on teaching, including this chapter, are sufficiently general to apply across a variety of subject areas. Nevertheless, as you explore each resource, you may wish to keep in mind some ways in which teaching psychology poses special challenges and opportunities.

  • Psychology may attract students who are unusually introspective and seek to use introspection as a source of evidence.

  • Psychology students will have strong intuitions or preconceptions about human psychology, and some of those preconceptions may be strongly held and resistant to change. A similar consideration is relevant across most social sciences, and some humanities (where, e.g., students may have strong convictions about historical narratives), but is probably of lesser importance in the natural sciences. (It would be an unusual student who had strong personal commitments to particular concepts in organic chemistry, and it’s been centuries since we’ve seen committed Aristotelians put up a fuss over the laws of motion in classical mechanics.)

  • Issues of inference from evidence (statistical and otherwise) are especially challenging because the data in much of psychology are so noisy.

These three special challenges combine to bring epistemology closer to the surface in the study of psychology than in many other disciplines. This presents tremendous opportunities for teachers to address fundamental questions about the nature of knowledge, but it also imposes the burden of addressing epistemological issues in introductory courses, where many students are not prepared to deal with them.

At some point in your career, you will be faced with the tasks of designing and teaching your own class. In our experience, this is a key moment in the development of a teacher. The remainder of this chapter contains tips and strategies for success in this critical moment. These suggestions point to many different styles of teaching, all of which have proven effective in their own way.

One concept unifies every idea herein, and it is this: the goal of a great class is not to cover material or get through the book. It is to reach students in ways that will help them learn (Bain, 2004). Your goal, as you gain more teaching experience, is to figure out what learning psychology means for you and how you want your students to be different – smarter, wiser, more reflective, more skilled, more appreciative, more critical – as a result of being in class. Then, you must learn ways to help students achieve those goals (Wiggins & McTighe, 2005).

2. Five Steps to Designing a College Course in Psychology
2.1 Step One: Consider the Institution, the Curriculum, and the Students

It is helpful to take a realistic look at the academic culture of the place where you are teaching. You can build on your own observations by consulting with colleagues, and with students themselves, about the habits, practices, and expectations that are most common (Nilson, 2010). If you are new to an institution, you may be surprised at the norms for teaching and even more surprised at how students typically approach their learning. Some norms are part of the formal institutional structure, such as the lecture/section method used at many research institutions, or the types of assignments given within a certain curriculum. Other norms arise informally and are passed down from student to student, and from teacher to teacher.

The purpose of understanding teaching and learning norms is not to copy what has been done before, but to anticipate how your pedagogical choices will be received. If you are new to a campus, AND you are asking students to learn in a whole new way, you may want to ease them into the change and be prepared for a little resistance. It is not always easy to discern the norms and culture of an institution, and you may want to take some assertive steps to get the information you need. We suggest talking to as many colleagues and students as you can. Here are some questions you might ask:

  • How much of the reading will students do?

  • What kinds of lectures are students used to? PowerPoint? Stand-up comedy?

  • Is there precedent for students working together in groups? Do they do group projects?

  • What kinds of assignments are typical? Long, formal, end-of-term papers or shorter, more targeted essays and exercises?

  • How much of the teaching is experiential? Formal? Innovative?

Once you have a better sense of the context in which you are teaching, you can consider your goals and determine ways to achieve them.

2.2 Step Two: Think about Who Your Students Are and How You Will Include All of Them

In the previous section, we discussed the importance of designing the course around student learning, focusing on cognitive elements that allow students to understand course goals and engage in activities that facilitate learning. But there is more to learning than cognition, because (a) students are more than mere thinkers – they are individuals with their own feelings and sense of themselves, (b) instructors are more than mere providers of knowledge – we bring our own identities, biases, and practices to the process of teaching, and (c) science is more than facts and theories – the psychological literature reflects the identities, biases, and limitations of the scientists and the systems that created it. Unfortunately, teaching can also recreate and promulgate ideas and practices that, even in their aim to illuminate and explain, allow some students to experience affirmation and growth while others experience disenfranchisement and isolation.

There is a robust literature that explains the process by which schooling includes some students and excludes others, and that provides guidance to instructors for making their teaching more inclusive. For example, the literature on stereotype threat (Steele, 2011) describes the process by which the perception of cultural stereotypes results in different levels of performance among students. Building on the work of Maslow (1962), researchers in recent decades have been exploring students’ sense of belonging as a variable that may shape motivation and engagement. This research suggests strategies for creating a more inclusive classroom (Strayhorn, 2012).

This research is ongoing and expanding. Consider becoming more familiar with the literature by reading one of the articles listed below. In the meantime, the following strategies will begin to help you address your students’ sense of belonging:

  • Get to know your students and give them a chance to be known in class using ice breakers and other activities that allow students to reveal themselves.

  • Build one-on-one or small group office hours into your class early in the term.

  • Articulate clearly and regularly your desire for everyone to learn. Focus especially on classroom discussions as places of learning and respect. Set standards of respect and fairness for all discourse that takes place in your classroom.

2.3 Step Three: Focus on Student Learning to Define the Overall Purpose of the Course

As implied above, there is enormous benefit in framing or defining your class around core learning goals (Diamond, 2008). Teachers experience renewed motivation when they move from a content-centered approach (i.e., getting through the material) to a learning-centered approach (i.e., helping students achieve). This renewed sense of purpose typically generates more creative and innovative approaches to classroom teaching, which, if nothing else, makes the process more interesting. In addition, teachers who successfully communicate their purpose to students may find their students have increased motivation and willingness to work and learn. In classes where the purpose seems to be defined around the teacher’s lectures and interests, students are more likely to feel and act like spectators. However, when a class and all its activities are defined around student learning, students are more likely to feel engaged and act like interested participants (Nilson, 2010, p. 18).

The process of defining your goals and objectives begins by asking yourself how you want your students to be different by the end of the course. This holistic approach to thinking about change in students is the first step in Backward Design of your course, a design strategy that begins with the most important aspect of your course: your goals for your students (Wiggins & McTighe, 2005). The difficulty in accomplishing this stems from the challenge of figuring out which goals are most important, and how some goals are prerequisite to others. In addition, teachers typically have objectives of different types. For example, there may be facts we want students to know, skills we want them to acquire, or feelings and appreciations we hope they develop. We may also have goals around how students experience us, our course, and our field, and all of these might reflect our own values and beliefs about undergraduate education. All of these goals should be part of your inventory, but then, as mentioned above, the challenge is making sense of them, giving certain goals priority, and turning them into a plan of action. One way to do this is to define a large, terminal goal – the main objective you want your students to be able to achieve by the end of the course – and then work backward, identifying as many of the subordinate skills students will need as you can. Here is an example:

Terminal Goal or Objective:

I want my students to demonstrate the ability to use data and reasoning to address a major issue in public health, education or social policy.

Subordinate Goals or Abilities:

In order to complete the Terminal Objective, my students will need to demonstrate the following:

  • The ability to distinguish among different sub-areas of psychology.

  • The ability to critically evaluate social science findings reported in journals or the popular press.

  • The ability to write at least three paragraphs that demonstrate the distinction between observation and inference.

  • The ability to articulate the power and complexity of experimental design.

  • The ability to identify 10 threats to validity in a well-respected journal article.

  • The ability to apply the scientific method to questions about human behavior, and be able to identify misapplications of the method.

Notice that in the example above, the emphasis is on the demonstration of well-defined abilities. A common trap for many teachers is the tendency to define their goals in terms of what students will “understand” without defining the depth and breadth of that understanding or the way it will be demonstrated. The complexity of the learning process, and the fact that all of our students start out with different skills, prior knowledge, and approaches to learning, make teaching difficult to begin with. However, the more clearly you can define the skills and abilities that students should acquire during a semester, as well as how each and every activity and assignment furthers those goals, the more likely your students are to actually learn something (Mayer et al., 2001). Notice also that, in the example, the Terminal Goal has a real-world application. It is the kind of task that might motivate students because it addresses a practice with which they are familiar. The Subordinate Goals are less real-world and more academic, but if you organize the course around the more engaging applications of the Terminal Goal, students will be more motivated to develop the subordinate skills.

To highlight the process of thinking about institutions, curriculum, and students, it is helpful to contrast introductory and advanced courses and to illustrate how one might design an engaging and successful class for each.

In psychology, as in other disciplines, there are predictable differences between the approaches to introductory and advanced courses in the field. Introductory course enrollments are typically large, and therefore these courses are taught in a lecture format. The aim of many introductory courses is to expose students to a breadth of content, to introduce them to a large set of basic concepts and foundational facts, and to test their ability to comprehend them. Assessment often involves some kind of objective test with a combination of multiple-choice and short-answer items. This may not be the best way to introduce students to the field of psychology, but as it is a common practice, it is a good place to start.

Recently, instructors and scholars have noticed some failings in this mode of teaching introductory psychology. First, the pedagogical process is not very interesting or engaging, making it a poor way to attract students to the field – an important yet often unrecognized departmental goal for any introductory course. Second, this mode of pedagogy gives students little introduction to the work that psychologists actually do. A student who is wondering what it’s like to be a researcher or a psychological practitioner will get no feel for what that work is actually like. Finally, listening to lectures and taking tests gives students little opportunity to participate actively. Many students in an introductory class will be freshmen who are new to college. These students would benefit from learning to apply theories and critique ideas. They would also benefit from practicing these skills with other students with whom they could make connections. It’s true that a fiery, dynamic lecturer can generate interest by sheer force of personality and entertainment value, but a conscientious designer of introductory courses would want to build a course that benefits the institution, the department, and the students. Consider the following approaches:

  • Build the course around some big questions or themes that have relevance for students.

  • Learn to use small-group or paired-work exercises in your large lecture course. Break up the lecture and get students working on interesting questions together. This is especially crucial if your class meets for more than 50 minutes at a shot.

  • If you have to use objective tests as your primary mode of assessment, try to create one assignment during the semester that allows students to explore their own interests. Even in a big class, find a way to see or acknowledge every student.

  • If you are going to lecture, lecture really well. If you don’t know how well you lecture, have a colleague or consultant observe you. Once you’ve mastered the art of delivery, design lectures around the most interesting feature of any chapter. Tell stories. Find a way to demonstrate a concept or give students a chance to experience it.

  • Use end-of-class assessment activities and ungraded writing, e.g., one-minute paper assignments, to help students realize what they have learned in every class.

Advanced courses, in contrast, are more likely to be well designed for the interests of students and the institution. They are typically small, sometimes very small. The aim is to explore one area of psychology in detail through reading, lecture, discussion, and sometimes active experimentation. Students are often asked to design experiments or engage in psychological practices, and write longer, journal-length papers – in other words, to start doing some of the real activities that psychologists do. Another goal may be to test for advanced analytical abilities and to assess students’ capacity to integrate and synthesize theories and methodological approaches. Many advanced or capstone courses may also be designed to socialize students into the values and norms of the field.

2.4 Step Four: Develop a Course Plan that Pulls Everything Together

Every course has multiple elements; purposes and goals, motivation and incentives, content, activities, and assessments and grading are just a sampling. The end product of a course plan is a course syllabus in which you and your students should be able to identify all of these elements. As a course designer, you should consider how each element of the course fits together, or aligns, and how subsequent elements of the course build on previous ones (Wulff, 2005). The degree to which your pedagogical plan is transparent to students is an issue we will discuss in a later section. For now, consider using the questions below to develop your plan for how you will teach each aspect of the course.

  • What is the purpose of this section or chapter (develop a skill, practice a technique, master an area of knowledge)?

  • How will I motivate students to learn this? Is the material or skill innately interesting or valuable? What does it teach students to do? Will students benefit from a demonstration or model? Will I use a graded test or assignment to increase motivation?

  • What new information will students need (theories, studies, examples, etc.) and how will they get it (lecture, reading, video, observation, etc.)?

  • What action will students perform on that new information (write about it, discuss it with peers, experiment on it, reflect on it, etc.)?

  • How will I assess what students are learning (graded paper, ungraded written assignment, observation)?

2.5 Step Five: Write a Course Syllabus that Establishes a Contract between You and Your Students

The final step in designing a course is the presentation of the syllabus. The syllabus accomplishes one essential goal: it supplies students with all the information they need in order to understand and complete the course in a way that helps them set their expectations and guide their behavior. It can be useful to think of the syllabus as an informal contract between you and your students.

There are many styles of syllabi. Some faculty members choose to put everything in writing, including the purpose of the course and the rationale for its design, while others include just bare-bones logistical information about due dates, grade requirements, and texts. To help you decide how much detail to include, think about what is important to you and to your students. Also, use the syllabus as a reference or teaching tool throughout the semester. Tone and style are both personal choices; however, be aware that the tone of the syllabus does communicate something to your students. You’ve seen hundreds of syllabi in your lifetime, so we don’t have to describe one. Still, you might want to consider these suggestions for writing a good one.

  • Do not take for granted that your students know more about your institution than you do. Remember, many students in any classroom are just as new as you are. Avoid abbreviations and lingo. Remember, first-year students and part-time students may not be familiar with nicknames and other local jargon. Stick to the facts and include as many as you can.

  • Highlight the most important ideas or processes of the course. Don’t be afraid to include some big ideas in the syllabus, especially if they provide a context or purpose for the course.

  • Edit carefully the calendar information you include. Be aware of holidays and other campus activities. Remember, students will use this syllabus to plan their semester.

  • Make the document as useful as possible, so that students will keep it and look to it often. Whether on paper or on the web, the syllabus should be a useful document that you and your students refer to because it has good, reliable information.

  • Build the weekly items in your syllabus around questions to be answered rather than topics to be covered.

3. Some Practical Considerations in Creating a Course

Once a faculty member establishes goals and rationales for a course, the pragmatic steps and choices become much easier. Otherwise, the structure of the text or the vagaries of the semester calendar end up driving the purpose of the class, which is not ideal. In this section, we discuss four choices that every faculty member has to make: textbooks and readings; use of class time; assignments and other out-of-class work; and grading.

3.1 Choose a Textbook, Readings, and Resources that Help You Teach

The trick here is to resist the temptation to let the tail wag the dog by allowing the content and structure of the text to structure the goals of your course. Few faculty members ever find the perfect textbook until they break down and author their own, and even then, there always seems to be something out of place, missing, or overemphasized. Most instructors use supplemental readings and digital resources to add emphasis and to provide students with more varied ways of learning material.

There are some easy ways to find a respectable pool of good textbooks to consider: ask your colleagues for recommendations (they will know the level and type of textbook students at your institution are used to, and they may even have direct experience teaching with a given book); check the core collection or reserve room of the library, which will have copies of all the textbooks currently in use; write to publishers for review copies of books you have heard of or seen advertised; and consult an online resource, e.g., A Compendium of Introductory Psychology Texts at http://teachpsych.org/otrp/resources.

Your first consideration is the content. You cannot teach effectively from a book that you neither respect nor agree with, unless you design the entire course around debunking the text, which many students find confusing. This does not mean that you have to agree with everything the authors present. Allowing students to see you display a little healthy disagreement with the authority of the text is probably good for most students, but it should not be a daily ritual. Find a book that provides intelligent and scholarly treatment of most topics, and that does so in language that you and your students can understand and appreciate. If the book organizes material in a way that advances your understanding of things, then you have an additional advantage. Quality of content is the most important consideration, but textbooks contain many other features that can help you teach more effectively. Some of these include: illustrative examples that explain concepts in various ways; exercises and activities that you can use during class or as out-of-class assignments; and sidebars and special inserts that discuss related topics, such as the field itself, real-life or policy applications, and personal biographies of researchers or historically important research. Most books these days also include student study aids, such as review questions or self-tests. A textbook today may include an online supplement that can help you develop lectures or add vibrant visual material to the class. Where cost is an important consideration, consider using one of the increasing number of “open-source” or royalty-free textbooks available in electronic form. Textbooks come with myriad bells and whistles, not all of which will be helpful. Remember, the book is just a tool to help you teach better. It is not the entire class, nor is it a script that you have to follow. On the other hand, students are used to focusing on “the book” and looking to it for answers and guidance, so you are smart to choose one that really complements your approach.

3.2 Be Smart and Creative in Your Use of Class Time

There are many things you can do with class time other than lecture. Class time is your most valuable teaching commodity, but to make the most of it, we need to design it in the context of what students will do in other settings, such as reading, doing homework, or working with other students. With this in mind, class time is probably not the best time for students to encounter new material for the first time. Research in science education, for example, suggests that class time is a good opportunity to let students work together, and for you to observe students at work and give them timely, appropriate feedback (Deslauriers et al., 2011). If you are a stimulating lecturer who can motivate and inspire students to greater heights of academic achievement, then some amount of lecture will likely serve you and your students well. But uninspired lectures that simply cover material, particularly material that can be learned by reading or watching a video, are a poor use of valuable time with students. If you need to introduce or review material, do it quickly – within 15 or 20 minutes. Use the remainder of the time to:

  • organize small-group tasks that allow students to engage with or question the material; or

  • lead all-class discussions about interesting, controversial topics. These can be organized as debates, or as extended role-play exercises that ask students to take the perspective of a particular point of view or theoretical orientation; or

  • run demonstrations with discussion and analysis.

The design of class time is even more important if your class is longer than 50 minutes or only meets once a week. In these cases, it is important to break the class into clear segments with clear goals.

Class time with students is a valuable resource that you must steward with advance planning. Design your class so that students arrive with questions or insights they have gleaned on their own. Use class time to give students a chance to practice analyzing theories, applying concepts, building models, or simply answering questions. Some good examples appear below.

3.3 Design Assignments that Allow Students to Make Better Use of Class Time

Students spend more time completing assignments than on any other aspect of school, so it is vital that assignments require students to do significant, targeted, academic work. Students’ performance on assignments can be improved by connecting some aspect of the assignment to the work students will do in class the next day. For example, if students are writing reports on research articles, have them use some aspect of those reports to do an in-class analysis. Motivation can be further increased by setting up in-class peer groups that require individuals to come to class prepared. As you begin to develop your first assignments, look back on some of the assignments you were given, and ask colleagues for their ideas. Consider the assignments that are typically used in psychology classes – the research report, the case analysis, the compare-and-contrast essay, the journal article review, the lab report – because these are forms that may be familiar to students. Then focus on the specific goals and objectives you have created for that section of the course, and modify the assignment in the following ways.

  • Consider designing a series of assignments that build students’ skills over the course of term.

  • Align what students are doing in their assignments with what they are doing in class. Students can practice modes of analysis in class that they can apply to assignments. Conversely, students build expertise in assignments that they use to participate in class. Make these connections clear to students.

  • Match the length, difficulty, and scope of the task to the skills you want students to demonstrate. Shorter, focused assignments typically offer more stimulating educational experiences than longer, more complex works. At some point, it may become necessary for undergraduates to demonstrate their ability to sustain an analysis or project for over 40 pages, but such work is often done as an undergraduate thesis or capstone project.

  • Communicate the purpose of the assignment in clear terms of high academic standards. Focus on what students are accomplishing for themselves.

  • Effective assignments are clearly defined and have well-established standards. It does no good to wait until you grade a paper to tell students what you were looking for.

  • Effective assignments are no larger than the skills they are designed to teach. Don’t ask students to produce huge products and long papers to demonstrate small skills over and over again.

  • Effective assignments produce real products, with form and structure. Rather than asking students to write a 5-page paper on X, ask them to write up a case analysis or grant proposal, mock legal brief, committee report, letter to the editor, or a publishable book review.

  • Effective assignments may make use of imagination and perspective, ask students to take on a role and write from that perspective, e.g., take the role of a patient, or speculate on a hypothetical situation.

  • Effective assignments combine the demonstration of well-defined skills and abilities with opportunities for creativity, uniqueness and personal expression.

  • Effective assignments ask students to demonstrate skills that are directly related to the core goals of the course. That is, students should have to rely on what they learned in class to successfully complete an assignment.

  • Effective assignments often include students working in pairs or teams, although students should be individually accountable for their own work and their own grades.

3.4 Use Assessment and Grading to Review Students’ Work and Give Them Necessary Feedback

Grading students’ work effectively is a critically important part of teaching, and not easily done. First of all, let’s define our practice. Assessment is the practice of critically reviewing students’ performance. It can be formal or informal. It can result in constructive feedback, or simply a shift in our perception. Assessments, such as ungraded quizzes or “clicker” questions, can also be used to help students assess their own understanding. Assessments are powerful teaching tools. They keep teachers and students connected to learning and they provide both with valuable guidelines for how to succeed.

Sometimes, as with formal tests, quizzes, and papers, our assessments result in grades. Grades are fraught because they are typically associated with formal, institutional records. In other words, they have lasting consequences. All assessments should be accurate, fair, constructive, and timely, but grades need to be especially so. Like it or not, they are a big part of what motivates students to work. Because of that, grades should be based on your central values and objectives for whatever course you teach. Here are some suggestions.

  • Grading begins with your very first thoughts about the course. Once you identify the skills and abilities you want students to demonstrate, you must assign value to the achievement of those skills and to their partial achievement, and then translate that value into whatever grading scheme your institution requires. Listed below are a set of considerations that you can apply to every assignment or test you grade, as well as to the overall grade.

  • Establish and communicate specific standards for everything you grade. Inform students upfront what will be graded and how. Reaffirm those standards in the comments that accompany your grades. Remember, the primary purpose of grades is to give students useful feedback about their progress.

  • Begin grading short assignments and in-class work early in the term. This will help students become familiar with your standards and their level of preparation.

  • Establish ground rules to achieve fairness in grading. Inconsistency in rules and procedures will communicate favoritism and capriciousness. It is not necessary to establish rigid practices to achieve a sense of fairness; however, your rules must apply to all your students and in the same way.

  • Grade a variety of student work. Make sure your grading structure reflects all of the objectives you have identified for the course. Naturally, you will want to give greater weight to the core objectives. However, you can keep students working and learning at a steady pace throughout the term if your grading scheme gives them continuous feedback about how well they are doing along the way.

  • The grading of participation in class should, like all other grades, include clearly defined standards.

Remember that, under the best of circumstances, grading is difficult. Grading brings to the forefront a fundamental conflict inherent in our work as teachers. We are helpful guides, mentors, and coaches who work compassionately and tirelessly to help students master a new terrain; but we are also gatekeepers, charged with setting and enforcing standards for participating in a profession (Elbow, 1986). The tension between those two roles is enough to give all of us a knot in our gut when faced with a difficult grading task. The best way to mediate this conflict, fortunately, is relatively straightforward: set out clear standards that students must meet at the outset; then enjoy your role as helpful guide.

4. Teaching Psychology in an Age of Remote Instruction

The recent COVID pandemic has changed the landscape of teaching and learning. At the time of this writing, we are still in the midst of the pandemic, and remote teaching and learning are the norm. Once considered optional, technological teaching tools such as Learning Management Systems, Zoom, Panopto, asynchronous learning, breakout rooms, and testing software are currently essential. In all likelihood, the technological tools of remote instruction will continue to be widely used post-COVID. Some schools will increase their remote learning options, and instructors who are once again in the classroom will enhance aspects of their teaching using the tools they learned out of necessity.

In this section, we provide some observations about our new, technology-enhanced learning environment and suggest some steps for successful teaching in the years ahead.

  • Just as instructors used to articulate their facility with teaching large lecture courses or small seminars, instructors in the days ahead will, at the very least, need to be able to discuss teaching with Zoom (or some similar platform) and using a Learning Management System such as Canvas. This is the new minimum.

  • The pedagogical conversation among instructors has shifted dramatically toward what experts would call “student-centered learning.” The move to remote learning has focused attention and conversation on key issues in learning, including engaging students, sustaining motivation, building community, inclusivity and equity, and thinking about students’ learning environments.

  • Attention to students’ access is higher, and instructors are thinking carefully about synchronous and asynchronous modes of learning.

The shift to remote teaching and learning has made online instruction ubiquitous. However, not all instructors were successful in their remote teaching efforts, which is fully understandable. Technology, online or otherwise, is like any other tool for enhancing student achievement (Manning & Johnson, 2011) – it is only as powerful as the thoughtfulness of the person using it. Consider the following as you engage in remote teaching or any other technology-enhanced pedagogy.

  1. Define your goals. In student-centered terms: what changes (learning or abilities) do I want to see in my students? What teaching/learning problem am I trying to solve with the application of a technology?

  2. Consider what tools are easily available (e.g., Zoom, an LMS, email, the web, newsgroups, chat, multimedia software, discussion boards, etc.). What are the institutional resources you can draw upon? What tools are other psychology instructors using?

  3. Define a strategy for integrating technology into the core learning of the class. Think about incentive structures to motivate student engagement within and across platforms and assignments.

  4. Have a back-up plan. Issues of access and functionality can quickly undermine instructional plans that rely on technology. A great example is the use of classroom discussions on Zoom: what if time zones or bandwidth limit student access? What are the asynchronous, limited-technology options for learning and completing assignments?

  5. Assess how well your strategy has met your goals. Was the effort worth it? Did using this technology increase student learning or motivation?

When you consider the framework above, the answer to the question “When should I use technology in my teaching?” becomes straightforward: whenever it helps you achieve a clear pedagogical goal in a cost-effective way.

4.1 How Do I Get Started Using Instructional Technology in My Teaching?

Psychology researchers are often proud of being Jills-of-all-trades. Many of our research projects call on a broad spectrum of skills, and we may have to switch hats from manager, to programmer, to carpenter in the space of an afternoon. It’s tempting to bring some of those skills to bear on developing technological solutions to teaching problems. However, the costs of developing teaching technologies from scratch are often prohibitive (in terms of your time). Activities like website design may be fun for some of us, but they compete for scarce time with syllabus design, lesson planning, and student contact. Keeping it simple should be a paramount consideration when implementing any technological innovation.

At most universities, the shift to remote teaching has necessitated a vast expansion of technology services for instructors. It is likely that your psychology department will have its own staff of technologists and instructional designers who can assist you as needed. Get to know these people. In the meantime, consider that modern information technology has enabled some discipline-agnostic approaches for engaging students and helping them reach learning goals. For example:

  • Novel approaches and tools for collaborative knowledge construction, like shared virtual whiteboards, allow small groups to engage in real-time interactions around a shared artifact to create collaborative mind-maps and other diagrammatic or narrative representations. Two widely available examples of such tools are gSlides and LucidChart.

  • The availability of volunteer-driven knowledge creation and curation communities provides students an unprecedented chance to engage with the content as contributors. Consider, for example, class assignments that ask student teams to create or improve Wikipedia articles, create and manage new Reddit threads, or ask and answer Quora questions.

Both the discipline-specific and the discipline-agnostic tools have evolved at a phenomenal rate in the last decade, and make today an especially exciting time to use technology to accelerate the teaching of psychology.

5. Managing and Mentoring Teaching Assistants

In many ways, the Teaching Assistant (or Teaching Fellow or Graduate Student Instructor) is a strange creature whose role is rarely well defined. The TA walks the shadow world between colleague, student, and servant, as all apprentices must. It is the supervising professor who determines, often implicitly, which role a TA will play. The TA experience is likely to feel servile when TAs’ roles are unclear, their tasks are menial, or they do not participate in setting the goals of the courses and sections they help teach. For example, TAs commonly feel least satisfied when they grade exams they’ve had no part in creating and papers they’ve had no part in assigning. On the other hand, a great relationship between faculty member and TA can be a graduate student’s most rewarding experience. Supervising faculty can, and often do, have a profound impact on the lives and careers of their students by introducing them to teaching and the life of an academic.

Just as you may not have received training in undergraduate teaching, you almost certainly had no training in management or mentorship. Here are some strategies to help you become a better manager and mentor for your TAs.

  • Meet with the TAs prior to the beginning of class. Explain your pedagogical goals and ask for their input. If the TAs will teach sections, ask them to articulate, preferably in writing, what their section will do for the students. Engage them in a conversation about their overall goals, as well as their emerging understandings about teaching and learning.

  • If possible, involve your TAs in planning the course, the lessons, and the assignments. This will not only help you come up with better material, but will also be an invaluable learning experience for the future faculty members under your wing. The more invested each TA feels in the course the more rewarding the work will be. For example, you can have each TA give a guest lecture, then generate exam questions about the guest lecture, and grade the specific questions they’ve generated.

  • Give your TAs more autonomy to run their section as they see fit. Once you have agreed on what the goals of the section are, let the TA experiment with means.

  • Clarify expectations at the outset. What will the TAs do? What will they be trying to accomplish? How will they be evaluated?

  • Give your TAs the support they need to function effectively. Usually this means meeting early and often, especially in the very beginning of the course. It also means keeping track of your end of the course paperwork, and clearly delegating various assignments to different TAs. For example, who will be responsible for compiling all the section grades at the end of the course?

  • Offer to observe your TAs’ sections to help them become better teachers.

  • If your institution offers TA training and development, require your TAs to avail themselves of the training before teaching your course. Make it your business to let your TAs know about the resources available to them.

  • If your TAs are responsible for grading, effectively delegate to them that authority. One of the most frequent and bitter complaints heard from TAs is that course instructors summarily overrule their grading decisions without consultation. If you feel you have a question about a grading decision, meet with the TA about it. The TA will often have directly relevant information about the grade and the student in question. Remember, TAs probably know more about the students in their sections than you do.

  • Once you have effectively delegated authority to your TAs, you must also hold them accountable for the tasks you have assigned. Lack of accountability leads to complaints from students. Some of those complaints will come to you, but others will go straight to your chair or dean. Rest assured, if these complaints become vociferous or numerous, you will hear about it. Save yourself the headaches; have clear, fair standards and stick to them. Consider that effective delegation implies you have given the members of your team the freedom to fail, as well as to succeed.

Although successful delegation is difficult, the rewards are large. By investing energy in effective delegation, you will save time in the long run, develop better mentoring relationships with your TAs, and have a better class.

6. Conclusion

We hope this short introduction to teaching will help you navigate the uncertain waterways towards the land of confident and competent teaching. The first years are important, but don’t be discouraged if they don’t go well. Keep trying new things and asking for help. We’ve seen great teachers emerge after years of average performance. As a faculty member, teaching will be a big part of your life, so it’s important to figure out how to do it well and also how to enjoy it. We’ve only scratched the surface in this chapter. Here are some additional resources:

  • Your Campus Teaching Center. Chances are your campus has a teaching center with consultants who can help you define goals, think of strategies for meeting those goals, and observe your teaching. There, you are also likely to find a library of books on teaching, and access to a network of people on campus who can give you advice.

  • The American Psychological Association, www.apa.org/. Type “teaching” into the search box for the latest articles on teaching in psychology.

  • APS Resources for Teachers of Psychology, www.psychologicalscience.org/index.php/members/teaching

  • The Society for the Teaching of Psychology, http://teachpsych.org/

15 Applying for NIH Grants

Carl W. Lejuez, Elizabeth K. Reynolds, Will M. Aklin, & B. Christopher Frueh
1. Introduction

One of the most important and daunting roles of the early academic is the pursuit of National Institutes of Health (NIH) grant funding. Although NIH funding allows for great autonomy and comes with validation and prestige, the process can feel overwhelming even for the most seasoned investigators. Therefore, being armed with information is crucial.

Most importantly, it is vital to keep in mind that applying for NIH funding is much more of a marathon than a sprint. It is, however, a marathon with no planned route, one in which you often realize you have been going in the wrong direction and have to double back, with few signs to reassure you. It is also a run in which everyone else struggles at one time or another, though most are much more eager to talk of their successes than their struggles. You will be questioned and second-guessed at every step by those evaluating your performance as well as by your supporters, and you are guaranteed to feel like you are stumbling across the finish line no matter how confident you were at the start.

With those caveats in place, it is a marathon with some tangible positives for those who are successful, including resources to do your research in the best way possible, an opportunity to build a research team of pre- and post-doctoral trainees and support staff, better visibility in the research community, and a big boost in the promotion and tenure process. Moreover, these scientific benefits often come with financial support, which may serve as the basis for your salary in an academic medical setting or allow you more time to devote to research through course buy-outs or summer salary support in a psychology department. Clearly, the pursuit of an NIH grant is a high-risk/high-reward venture that should not be entered into lightly, but it should be an option for anyone who is willing.

Aiming to provide a guide to NIH grants with the early stage investigator in mind, this chapter outlines many of the key issues you will tackle throughout the process. These include: (a) Developing Your Idea; (b) Finding the Right Mechanism for You and Your Idea; (c) Preparing Your Application; (d) Submission and Receipt of Your Application; (e) The Review Process; and (f) Post-Review Strategies. We address these issues in light of recent changes in the NIH grant submission and review process, aiming to provide an objective source complemented by our favorite tips for your consideration.

2. Developing Your Idea

A lot must go into moving from the first spark of an idea to a fully formed grant. A viable grant should begin with an idea that is well suited to your background and focused on a topic you know well. It is important to select research questions that will allow you to maximize your professional development and provide a chance to make your own “mark” on the field. Therefore, it is critical to consider how you can strategically develop your research to be programmatic in nature so that it will be sustainable and long-lasting, making numerous cumulative contributions to the field. While it is imperative to select a topic that fits with your expertise and interests, a successful NIH grant also must have clear public health relevance, a place within the scientific literature in that field, and the potential to significantly advance the existing knowledge base.

Based on the review criteria we will discuss in detail later, key questions to consider when generating ideas include: How will this study be significant, exciting, or new? Is there a compelling rationale? Is there potential for high impact? How will aims be focused, clear, feasible, and not overly ambitious? How will the study clearly link to future directions? Have I demonstrated expertise or publications in line with the approach? Do I have collaborators who offer expertise to the proposed research? Do I have the necessary institutional support?

Once you get a bit further along in developing your idea, it can be helpful to talk to NIH staff, particularly staff whose portfolio includes similar types of grants. One way to see what has already been funded, and to ensure your research idea is reasonable (and not already being done!), is NIH RePORTER (http://projectreporter.nih.gov/reporter.cfm). This electronic database provides information on NIH-funded research, including titles, principal investigators, and abstracts. RePORTER is also a means to get a snapshot of your field, including possible collaborators and competitors.
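If you prefer to query RePORTER programmatically rather than through the web interface, NIH also exposes the same data through a public API. The sketch below is a minimal example in Python; the endpoint URL, payload fields, and result keys shown are assumptions based on the publicly documented RePORTER v2 API, so verify them against the current API documentation before relying on it.

    # Minimal sketch of a programmatic RePORTER search.
    # The endpoint and payload structure below are assumptions; check the
    # current RePORTER API documentation for the authoritative format.
    import requests

    def search_funded_projects(search_text, limit=25):
        """Return funded-project records whose text matches a free-text query."""
        payload = {
            "criteria": {
                "advanced_text_search": {   # assumed criteria field names
                    "operator": "and",
                    "search_field": "projecttitle,abstracttext,terms",
                    "search_text": search_text,
                }
            },
            "offset": 0,
            "limit": limit,
        }
        response = requests.post(
            "https://api.reporter.nih.gov/v2/projects/search",
            json=payload,
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("results", [])

    results = search_funded_projects("distress tolerance adolescent smoking")
    print(f"{len(results)} funded projects found")
    if results:
        print(results[0])  # inspect the returned fields (titles, PIs, abstracts)

A quick scan of the titles and abstracts returned by a search like this (the query string above is purely hypothetical) can tell you both whether your idea is already being funded and who your likely collaborators and competitors are.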

3. Finding the Right Mechanism for You and Your Idea

A critical component of the idea development process is selecting the right grant mechanism. Similar to getting advice on your grant idea as noted above, you should consider checking with a program official from the institute you are targeting with your application to assess fit between your idea and programmatic priorities, your career trajectory and goals, and a particular mechanism. As an early career psychologist, you will likely be choosing between a career development award (“K award”) and an investigator-initiated research award (“R grant”).

In the following sections we provide a detailed description of the K award and the R01 grant, including a direct comparison of the two. Although we will not discuss them here, you should be aware that NIH also offers postdoctoral fellowship awards (F32s) that may be a useful option to consider (see https://grants.nih.gov/grants/guide/pa-files/PA-20-242.html for details). Moreover, there are some exciting newer mechanisms that provide high levels of flexible funding for unusually creative early stage investigators. You should consider these especially if your work is particularly interdisciplinary and/or novel in ways that do not fit well into existing funding niches. We will not address these here, but more information can be found at the NIH Office of Extramural Research (https://researchtraining.nih.gov/).

3.1 K Awards

There are a number of types of K awards (for more details see: https://researchtraining.nih.gov/programs/career-development). The most relevant for early career psychologists are the K01 (Mentored Research Scientist Development Award for career development in a new area of research or for a minority candidate), K08 (Mentored Clinical Scientist Development Award for development of the independent clinical research scientist), and K23 (Mentored Patient-Oriented Research Career Development Award for development of the independent research scientist in the clinical arena). There also are mid-career and even later career development awards that provide resources for investigators to develop new areas of expertise – and provide mentorship to junior investigators. The K award usually requires that at least 75 percent of your effort (9 calendar months in NIH terms) be devoted to the research project and to career development for 3–5 years. These awards are evaluated as training mechanisms. Applications require not only a research plan but also a training plan for career development activities under the guidance of a research mentor, local collaborators, and external consultants. The university must usually agree to release the PI from most teaching, clinical, and administrative duties. In return, NIH will pay the PI’s salary, up to certain limits. There is a great deal of variation among the different NIH institutes as to which Career Awards are available, what PI qualifications they expect, the dollar limits for salary and research expenses that they will award, their application deadlines, and their supplemental proposal instructions. It is best to contact the relevant institute prior to preparing your proposal to be sure you understand that institute’s guidelines for a K award.

3.2 R Grants

The R grants most relevant to the early academic include the R03, R21, R34, and R01 (for more details see: http://grants.nih.gov/grants/funding/funding_program.htm#RSeries). The R03 (Small Grant Program) provides limited funding for a short period of time. Funding is available for two years with a budget of up to $50,000 per year. Some institutes (e.g., NIDA) also offer rapid transition awards called B/START (Behavioral Science Track Award for Rapid Transition), which consist of one year of funding for $75,000. Because reviewers submit reviews without a full review meeting, the B/START mechanism often has a shorter lag time to completion of the review process (i.e., funding occurs within approximately 6 months of the date of receipt of the application).

The R21 is an exploratory/developmental research grant used to support the early stages of project development (e.g., pilot or feasibility studies). Funding is available for two years and the budget cannot exceed $275,000. Extensive preliminary data are not expected, but applications must make clear that the proposed research is sound and that the investigators and available resources are appropriate to the task. While the R21 mechanism is sometimes considered most relevant for previously funded senior investigators undertaking high-risk/high-reward research and/or a new area of research, it is our experience that first-time investigators can be successful seeking R21s if their idea is novel and has potential for transformative impact in their field of study.

The R34 is a clinical trial planning grant intended to support the development of a clinical trial. This program may support establishment of the research team, development of tools for data management, development of the trial design, finalization of the protocol, and preparation of a manual. For example, NIDA offers this mechanism exclusively for treatment development and some initial testing. The R34 lasts for three years with a total budget of $450,000 and no more than $225,000 in direct costs allowed in any single year.

The R01 is NIH's most commonly used grant program and is generally awarded for 3–5 years. There is no specific budget limit, but budgets at or below a particular amount (typically $250,000 in direct costs per year) can be submitted in a less detailed “modular” format, whereas budgets above a particular amount (typically $500,000 in direct costs in any year) must obtain institute approval before being submitted. Although you should request the budget you need to conduct your project, an extremely large scope and budget in an application from a new investigator may raise red flags for reviewers. Interestingly, it is our experience that some early career investigators avoid the R01 mechanism because of their junior status; however, as outlined below, NIH has taken steps to encourage early career investigators with a “big” idea and adequate pilot data to consider an R01.
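As a rough illustration of how these budget thresholds work in practice, the sketch below classifies a proposed annual direct-cost figure against the “typical” cutoffs mentioned above. The $250,000 and $500,000 figures are the typical values given in this chapter, not official limits; they vary by announcement and can change over time.

    def r01_budget_category(annual_direct_costs):
        """Classify a proposed R01 annual direct-cost budget against the
        typical thresholds described above (illustrative, not official)."""
        MODULAR_CAP = 250_000         # at or below: modular (less detailed) budget
        PRIOR_APPROVAL_CAP = 500_000  # above: institute approval needed pre-submission

        if annual_direct_costs <= MODULAR_CAP:
            return "modular budget (less detail required)"
        elif annual_direct_costs <= PRIOR_APPROVAL_CAP:
            return "detailed (non-modular) budget"
        else:
            return "detailed budget plus prior institute approval"

    for budget in (225_000, 380_000, 625_000):
        print(f"${budget:,} per year -> {r01_budget_category(budget)}")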

3.3 K/R Hybrids

Of note, there is an additional mechanism that serves as a bridge between a K award and an R grant called a Pathway to Independence Award (K99/R00, nicknamed kangaroo). This mechanism provides up to five years of support consisting of two phases. The first phase provides 1–2 years of mentored support as a postdoctoral fellow. The second phase is up to 3 years of independent support (contingent on securing an independent research position). Recipients are expected to compete for independent R01 support during the second phase to allow for continued funding once the K99/R00 support has ended. Eligible principal investigators must have no more than 5 years of postdoctoral research training.

3.4 Advantages and Disadvantages of K Awards and R Grants

K awards and R grant mechanisms each have a number of advantages and disadvantages. A K award can provide 50–100 percent of your salary (depending on the type of K and branch of NIH) for up to 5 years. This provides a more stable period of funding than the typical R01, which usually funds only 20–40 percent of the principal investigator's (PI) salary for a period of 3–5 years, and it allows investigators to concentrate on their specified research efforts without the distraction of constantly pursuing additional sources of support or fulfilling extensive clinical or teaching responsibilities at their university. Other advantages of the K award are the opportunities for mentorship, training, and thoughtful development of a programmatic line of research in the PI's chosen area. The K will provide funding (typically $50,000 in addition to salary support) specifically to support these critical opportunities, which include time and funds for focused coursework, study materials, access to consultants and mentors, and funds to travel to meet with off-site mentors at their research labs or attend professional conferences. These resources are paired with a highly personalized training plan that is developed as part of the grant application. Because career development and training are central aspects of K awards, the research expectations are different and more modest than those for an R grant, which will have a much more fully specified research project (and no training component).

For all of those reasons, the K award is very well suited to junior investigators who may have only limited pilot data of their own and require additional training experiences before attempting larger-scale R grant projects. The fact that a K award covers most if not all of one's salary can also be very helpful in environments that require a large percentage or even all of one's salary to be covered on grants, which includes most medical school positions. It is unusual for more than 33 percent of one's salary to be covered by an R01; even though the R01 is a larger grant, it is less focused on training and support of a junior investigator and more focused on supporting the research project. More recently, even psychology departments and other environments more associated with hard salary funding have begun creating positions, or providing greater flexibility in existing positions, for those with a K award, expanding the range of environments in which these awards have great value.

Nevertheless, the K award is not necessarily the best mechanism for every junior investigator. Some are discouraged by the prospect of an ongoing role as “trainee.” Others are deterred by the lack of flexibility in the mechanism itself; for example, a K can be transferred to another institution, but doing so takes time and requires specific evidence that the new environment can support the research and that relevant and willing mentors are on-site. K awards also do not provide sufficient funding to implement large-scale research projects (e.g., a randomized clinical trial). Moreover, they require significant institutional support, documented within the application, that is not always proffered or feasible for budgetary reasons or instructional needs. Mentors on Ks do not receive financial support from the grant, which can create challenges in securing their engagement and sufficient devotion of time.

K awards also pay a vastly lower indirect cost rate (8 percent) than R grants (typically in the 50–65 percent range). Indirect costs are funds provided to the applicant's institution to cover the costs of administering and supporting the applicant's research. This amount is above and beyond the funds provided to the applicant for the research itself (called direct costs), but it is calculated as a percentage of the direct costs. Although this should not lead you to apply for an R grant over a K award if the latter is the better choice for you and your research, you should be aware that the lower indirect cost rate of a K award may leave junior faculty at a disadvantage in obtaining additional institutional support once the application is funded and the research begins.

Finally, there has been considerable chatter in recent years that while a K award can really jump-start a career for some, it might counterintuitively slow progress toward an R grant for others. Because K awards cover so much of one's salary, it can be challenging to show that one has the available effort to devote to other projects, particularly as PI, and there is often confusion about when someone with a K is “allowed” to start submitting R-level grants. As such, K awardees may be slower to write their first R than those without a K award. To be clear, this is not an argument against a K award per se, as they are indeed great for one's career, but this potential impediment to developing an independently funded research portfolio is important to be aware of for those who pursue a K. These timing concerns also need to be balanced against preliminary evidence that researchers who received a K award were more successful than those who did not in going on to obtain subsequent NIH research support (https://nexus.od.nih.gov/all/2019/04/02/association-between-receiving-an-individual-mentored-career-development-k-award-and-subsequent-research-support/).
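To make the indirect cost disparity described above concrete, the sketch below compares the total award value of the same direct costs under the 8 percent K rate and a hypothetical 55 percent negotiated R-grant rate. The direct-cost figure and the 55 percent rate are illustrative assumptions only; your institution's negotiated rate and the exact cost base used in the calculation will differ.

    def total_award(direct_costs, indirect_rate):
        """Total funds to the institution: direct costs plus indirect costs,
        where indirects are computed as a percentage of the directs
        (a simplification of how the actual cost base is determined)."""
        return direct_costs * (1 + indirect_rate)

    direct = 150_000                       # hypothetical annual direct costs
    k_total = total_award(direct, 0.08)    # K award indirect rate (8%)
    r_total = total_award(direct, 0.55)    # assumed negotiated R-grant rate (55%)

    print(f"K award: ${k_total:,.0f} total (${direct * 0.08:,.0f} indirect)")
    print(f"R grant: ${r_total:,.0f} total (${direct * 0.55:,.0f} indirect)")

On the same $150,000 of direct costs, the institution receives $12,000 in indirect support under the K versus $82,500 under the R, which is one reason institutions may treat the two mechanisms very differently when deciding how much additional support to provide.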

A major advantage of the conventional R01 award (and to a lesser extent other R grants) is the significantly larger project budget, dictated by the specific requirements of the scientific protocol. However, new investigators applying for any R grant must be prepared to demonstrate to the review committee that they have the appropriate background, expertise, and skills to implement and complete an independent research project. There are a number of ways to demonstrate these qualities, including relevant scientific pilot data, a “track record” of publications in your area of research, and a thorough, well-conceived, and convincingly argued research plan (i.e., scientific protocol). Applications for R funding are evaluated almost exclusively on their scientific merit, significance, and innovation. R01 grants are quite competitive, but there is a tangible advantage in the evaluation process if you are a new investigator, defined as someone who has never held R01 support (previous R01 submissions do not affect this status until one is funded). Specifically, in many cases your application will be considered in a separate pool of applications from new investigators only. This “levels the playing field” and prevents your application from competing directly with applications from more seasoned investigators. While not a significant disadvantage per se, as noted above, even large-scale R grants rarely cover all or even most of one's salary. This is unlikely to be a problem in environments where other “hard” or “soft” funding is available, but it should be a consideration where one is expected to cover a large portion of one's salary, as an R grant alone will not be sufficient in these cases.

3.5 Application Types

A large percentage of applications are investigator-initiated (often called “unsolicited”). Investigator-initiated applications can be submitted according to published submission deadlines, most often in February, June, and October. Applications that fall under special interest areas such as HIV/AIDS have different deadlines that accommodate a faster review, so you are encouraged to check these deadlines closely (see https://grants.nih.gov/grants/how-to-apply-application-guide/due-dates-and-submission-policies/due-dates.htm).

Another option is to submit in response to a Request for Applications (RFA). RFAs are meant to stimulate research activity to address NIH-identified high-priority issues and areas. They do not follow the regular deadlines and are announced with a specified deadline (often less than 4 months from the announcement). As such, researchers already interested and immersed in these areas of research have a decided advantage, because they are likely to have thought through some of the key issues and in some cases already have pilot data that could serve as the basis for the RFA submission. Of note, these applications typically are reviewed by specially convened panels that are selected based on the specific RFA and are therefore likely to have significant relevant expertise. As one might guess, this can be an advantage in that one is getting a review from individuals who are most qualified to evaluate the application. However, an expert also may have particular expectations about how things should be done and may be more likely to focus on esoteric aspects of the application that might go unnoticed by reviewers with less expertise in that area.

One source of confusion can be Program Announcements (PAs). PAs are similar to RFAs in that they are issued by one or more Institutes and outline topics that are of particular interest. Like an RFA, a PA provides a level of assurance that the type of research you are proposing will be of interest to the institute that issued it. More recently, NIH has begun phasing out PAs in favor of Notices of Special Interest (NOSIs). A NOSI is a newer format for NIH Institutes/Centers to share and update their research priorities. NOSIs are intended to replace PAs that do not have special review criteria or set-aside funds. Each NOSI describes aims in specific scientific areas and points to Funding Opportunity Announcements (FOAs) through which investigators can apply for support.

4. Preparing Your Application

The following paragraphs outline each section of a typical research grant. We also provide practical guidance regarding a few things to do and not to do. Please note that in addition to this information, you can find helpful guidance on preparing your application at http://grants.nih.gov/grants/writing_application.htm, and information on page limits at http://grants.nih.gov/grants/forms_page_limits.htm. We also encourage you to visit the following link, which provides more general tips in video format: https://public.csr.nih.gov/FAQs/ApplicantsFAQs.

4.1 Project Summary

The project summary is a two-part overview of your proposed project. The first part is the abstract in which you have 30 lines to describe succinctly every major aspect of the proposed project, including a brief background, specific aims, objectives, and/or hypotheses, public health significance, innovative aspects, methodology proposed, expected results, and implications. The second component of the Project Summary is the Project Narrative, which provides a plain-language 2–3 sentence description of your application.

4.2 Aims

Aims provide a one-page statement of your goal, objectives, and expected outcomes and implications. The aims should start with a brief statement of the problem and its public health impact, followed by what is known, and then the gap between what is known and how your project will address this gap. The most important part is the statement of your specific aims and the hypotheses you have for each aim. These statements should be concise and include clear, testable hypotheses. Occasionally, you may include an exploratory aim that addresses an important question for which not enough information is available to support a hypothesis; however, these should be used sparingly. You then should conclude with a summary paragraph that also suggests the research directions and implications that this work will spawn. NIH wants long-term, not short-term, relationships with its applicants. As such, your ability to discuss how this work will be not a single effort but the start of an effective line of research is crucial. A handy template of possible steps to follow in arranging your aims statement is provided in Table 15.1, and we provide guidance on things to consider doing and to avoid doing in Table 15.2.

Table 15.1 Possible template of steps in your aims section

Step 1: Identify the important societal/health problem of focus
Step 2: Review work that has been done towards solving this problem
Step 3: Identify the gap in the literature (e.g., what is missing, next, or even wrong with existing work)
Step 4: Articulate how you intend to address this gap with basic details of your intended approach/method
Step 5: Specify your aims for this research (i.e., specific aims) and what you expect to find (i.e., hypotheses)
Step 6: Highlight the potential implication of this research and how it sets up future studies (and your long-term independent line of research if possible) to make short- and long-term progress towards solving the problem outlined in Step 1

Table 15.2 Tips by grant section

Project Summary
  Do:
  • Focus on the big picture
  • Highlight public health significance
  Don't:
  • Include a lot of jargon
  • Get overly technical

Aims
  Do:
  • Include clearly testable hypotheses
  • End with a paragraph on future directions
  Don't:
  • Be overly ambitious or spread too thin
  • Propose too many exploratory aims

Significance
  Do:
  • Build a bridge from the problem to your study
  • Tell a clear story, making few assumptions
  Don't:
  • Introduce the study without first building a case
  • Wait until the end for public health significance

Innovation
  Do:
  • Be bold without overpromising
  • Discuss current and future benefits of your work
  Don't:
  • Forget to note any methodological innovations
  • Minimize this section for space reasons

Approach
  Do:
  • Provide rationale for approach decisions
  • Link expertise of team to strategies proposed
  Don't:
  • Leave out key methodological details
  • Leave out details establishing feasibility

Data-Analytic Plan
  Do:
  • Include a detailed power analysis
  • Link all analyses closely to the study aims
  Don't:
  • Power only for the main aims and hypotheses
  • Leave out an appropriate consultant

Human Subjects
  Do:
  • Discuss all aspects of subject safety
  • Focus on inclusion of underserved groups
  Don't:
  • Use the section as extra space for scientific information
  • Exclude a group without a rationale

4.3 Research Strategy
4.3.1 Significance

This section explains the importance of the problem or critical barrier to progress in the field that the proposed project addresses, and how the project will advance the application of scientific knowledge. In doing so, this section outlines the relevant literature and how this project directly addresses relevant gaps.

4.3.2 Innovation

This section explains how this work takes a new perspective, develops/utilizes a new approach, and/or moves the field in new directions. It is important in this section to emphasize that the novelty is not simply for the sake of being new, but holds important strengths over existing approaches – and sometimes novelty involves nothing new per se but creative use of existing methods or samples. You also should note that innovation can be a slow process and your work can be innovative if it sets the stage for future work. However, in this case it is especially up to you to be clear how your work can be the start of a fruitful and impactful line of research and why that makes the current work innovative. This may be especially true for those conducting pre-clinical or other forms of basic research.

4.3.3 Approach

This section describes the overall strategy, scientific methodology, and analyses to be used to accomplish the specific aims of the project. It is useful to link the approach as clearly as possible to the specific aims and hypotheses. Although there is a human subjects section below, human subjects issues that have important scientific bearing are addressed here. These might include an empirical justification for including only one gender or a theoretical reason to focus on a narrow developmental period in adolescence. Within Approach you also are encouraged to include two subsections. One subsection is Preliminary Studies, which outlines the previous work of you and other members of your research team that support your aims and hypotheses, and establishes that you are qualified to undertake and successfully complete the project. The other subsection is Potential Problems, Alternative Strategies, and Benchmarks for Success, which provides you with the opportunity to anticipate and address the questions that reviewers are likely to ask themselves as they read your application. We discuss the importance of these subsections and strategies to make the most of them below in “Tips.”

4.3.4 Data-Analytic Plan

This section outlines your statistical approach. Here it is crucial to address statistical power and sample size calculations, as well as any preliminary analyses, before outlining the primary analyses. The readability of this section and the overall flow of the application will be greatly enhanced if the plan is presented in the context of the specific aims and hypotheses.
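As one concrete illustration of the kind of power analysis reviewers expect to see, the sketch below computes the per-group sample size for a simple two-group comparison using the statsmodels library. The effect size, alpha, and power targets are illustrative assumptions; your own design (e.g., longitudinal or clustered data) will likely require a more tailored calculation, often developed with a statistical consultant.

    # Minimal power analysis sketch for an independent-groups t-test.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(
        effect_size=0.5,   # assumed medium standardized effect (Cohen's d)
        alpha=0.05,        # two-sided Type I error rate
        power=0.80,        # target statistical power
        ratio=1.0,         # equal allocation across groups
    )
    print(f"Approximately {n_per_group:.0f} participants per group are needed.")

Reporting the assumed effect size and its empirical or theoretical justification, not just the resulting N, is what makes a power analysis persuasive to reviewers.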

4.4 Human Subjects

Although it is not placed in the body of the research plan, the section on the protection of human subjects and the inclusion of both genders, children, and underserved members of minority groups is an important part of your application. It should carefully describe aspects of the grant related to the risk–benefit ratio and demonstrate that all necessary precautions are in place to protect the rights and safety of human participants. In most R grants this section includes virtually all of the information expected in an application for Institutional Review Board (IRB) approval. It should include strategies to ensure adequate recruitment of underserved groups and a clear statement of why certain groups are not included, especially if the exclusion is for methodological reasons (which also should be noted in the Approach section). This section also should include a data and safety monitoring plan, which is now required for all clinical trials (phases I, II, or III), and a monitoring board for larger-scale trials, multi-site trials, and those including vulnerable populations (e.g., prisoners).

4.5 Additional Sections

The following sections also need to be included in your grant application: Appendix Materials, Bibliography & References Cited, Care and Use of Vertebrate Animals in Research, Consortium/Contractual Arrangements, Consultants, Facilities & Other Resources, Resource Sharing Plan(s), Select Agents, Multiple PD/PI, and Use of Internet Sites. See http://grants.nih.gov/grants/writing_application.htm for additional details. Additional content sections specific to K award applications include the Candidate’s Background, Career Goals and Objectives, Career Development/Training Activities during Award Period, Training in the Responsible Conduct of Research, Statements by Mentor, Co-mentor(s), Consultants, Contributors, Description of Institutional Environment, and Institutional Commitment to the Candidate’s Research Career Development.

5. Submission and Receipt of Your Application

All applications are submitted through an electronic portal called grants.gov. You should note that your application must be submitted and free of errors by the due date. Therefore, be sure to closely follow all of the rules and regulations governing each aspect of the application to prevent your application from being withdrawn from the review process. Given these warnings, the actual submission process might seem daunting in its own right. However, your research office should have numerous tutorials and provide support to ensure that you complete this part on time and accurately.

Once you have worked with your research office to submit your application on grants.gov, an NIH referral officer will typically assign the application to the most appropriate institute. Although this includes a review of the entire application, decisions are driven by the title, abstract, and to a lesser extent the aims. This process also can be influenced by a cover letter you can include with your application indicating which institute you believe is the best fit. The most common institute for psychologists to submit applications to is the National Institute of Mental Health. However, it is important for you to develop your idea and then consider the most appropriate institute, which often means branching out to other institutes (for a list of institutes see www.nih.gov/icd/). Once the application is directed to a particular institute, it will be assigned to an Integrated Review Group (IRG) and then ultimately a study section within that IRG. These study sections keep a regular roster of reviewers that rotates every four years. You can get an idea of the likely reviewers from a study section's roster, and you may choose to request a particular section in the previously mentioned cover letter.

Once your application has been assigned to an NIH institute and study section, it is given a unique grant number. Shortly thereafter, you will receive a notice documenting this information and providing you with the name and contact information for the Scientific Review Officer (SRO), who organizes the work of the review committee (e.g., distributing applications, assigning specific reviewers, and coordinating dates and sites for the three review committee meetings each year). There is a lengthy interval between the time you submit your application and the time it is actually reviewed; for example, applications received on June 1 are typically reviewed in October or November. For this reason, many study sections will accept supplementary materials in the 3–4 weeks prior to review. For example, if you have collected additional pilot data since submitting your application, you may want to provide a brief report about these research activities and results. Such supplemental materials should be brief (e.g., 1–2 pages). To determine whether and when you might submit a supplement, contact your SRO. Supplemental material can be helpful, especially when a new paper is accepted for publication or when new data become available that were not expected at the time of submission. With that said, we do not recommend relying on supplementary material as “extra time” to add to your application after the deadline. Although often accommodated, supplemental material is not always accepted, and more importantly there is no guarantee that reviewers will consider this additional material, especially given that they already will have plenty to cover in the original application.

6. The Review Process

Approximately 6 weeks prior to the review meeting, members of the study section receive copies of all of the applications being reviewed in that cycle. Typically, three members (designated as the primary, secondary, and tertiary reviewers) are assigned to each application, based on the fit between their research expertise and the content of the grant. Reviewers provide written critiques of the application, organized according to the NIH review criteria: significance, approach, innovation, investigator, and research environment (see Section 6.1 for more detail about these criteria and how best to address them). If sufficient expertise is not available from the standing membership of the committee, the SRO can invite ad-hoc reviewers to participate. However, do not assume everyone, or even anyone, will be an expert in your particular topic, and be sure that your application does not rely on jargon or make assumptions about reviewer familiarity with the idiosyncrasies or conventional approaches of a particular area of research.

Table 15.3 Hypothetical grant timeline

Grant phase (starting – ending)
Initial development: October 1 – November 30
Preparing application: December 1 – December 31
Final preparation and submission: January 1 – February 10
Grant review completed: June 15 – June 30
Review comments received: July 1 – July 30
Plan resubmission: August 1 – September 30
Finalize and resubmit application: October 1 – November 14
Grant review completed (resubmission): February 3 – March 29
IRB approval obtained for just-in-time (JIT) request: April 1 – April 30
Council meets: May 1 – May 31
Funding starts: July 1

As the meeting approaches, the SRO will solicit feedback about which grants rank in the bottom half of the current group and will not be formally discussed at the meeting (a process referred to as streamlining). A final consensus about streamlining is usually reached at the beginning of each review meeting. Although streamlined applications are not discussed at the meeting and do not receive a score, the PI still receives the feedback prepared by each of the three assigned reviewers. The rationale for streamlining is to allow greater time for discussion of those applications perceived to be ready for support and thus to maximize the value of the review for both applicants and NIH program staff.

Roughly the top half of applications are discussed at the review meeting. The primary reviewer provides a description of the application and then outlines strengths and weaknesses in the domains listed above. Each additional reviewer adds any further information and can raise new points or note where they disagree with a previous reviewer. At this point the other panel members can ask questions and raise additional points (although they are not required to have read the application). The group then has a discussion. The goal is consensus, but this is not a requirement, and sometimes there can be significant disagreement among the reviewers. After discussion, the assigned reviewers provide their scores again; they may shift scores after the discussion to support consensus but are under no obligation to do so. The remaining committee members then provide their votes anonymously; however, if a vote falls outside the range defined by the assigned reviewers' low and high scores, that member is asked to provide a written explanation.

6.1 Core Review Criteria

Your application is evaluated on the following five core review criteria: (1) Significance, (2) Investigator(s), (3) Innovation, (4) Approach, and (5) Environment. For a detailed outline of these criteria with a comparison with previous criteria, see http://grants.nih.gov/grants/peer/guidelines_general/comparison_of_review_criteria.pdf. As we have covered Innovation and Approach thoroughly in “Preparing Your Application,” we focus only on the key features of the other three criteria here.

  • Significance. Is this work addressing an important question, and will it have an impact on the field in terms of knowledge, application, or, in the best-case scenario, both? It is not crucial that practical application be achieved immediately by the proposed work (especially in more basic research projects), but reviewers will want to see evidence of how this work ultimately could have such impact.

  • Investigator(s). Are you qualified to conduct this project, and how well does your team of collaborators (or mentors, for career awards) provide specific support in areas where your experience and expertise could be supplemented? In evaluating your credentials, reviewers often will focus on training and specific research productivity. Evidence that there is a specific role for each collaborator/mentor is crucial, as is evidence of past work together or future plans to ensure their participation. This is best represented in letters of support and clearly articulated in the “personnel justification,” an additional administrative section of the grant not covered here.

  • Environment. Can the work be carried out with adequate institutional support and resources? Additionally, are there unique features of the scientific environment, subject populations, or collaborative arrangements that are evident at the research site? These strengths should be clearly articulated in “facilities,” which is an additional administrative section of the grant not covered here.

6.2 Overall Impact/Priority Score

For each of the five core review criteria, reviewers evaluate your application and provide a score from 1 (exceptional) to 9 (poor). Each reviewer then also provides an overall score, also from 1 to 9. There are no strict guidelines for how reviewers should derive the overall score from the scores for the core criteria, and it is not meant to be an average or median of them. Moreover, your score can be influenced by several additional criteria, including human (or animal) subjects issues. Reviewers can make recommendations about your budget, but these recommendations should not affect your score.

From the overall scores of each reviewer as well as the other committee members, a normalized average is calculated and multiplied by 10 to provide a final priority score from 10 to 90, where 10 is the best score possible. As much as we would like to indicate a range of likely fundable scores, there simply are no hard rules that apply in all cases across all institutes (but for some guidance see: www.nlm.nih.gov/ep/FAQScores.html). With that said, many PIs would be quite pleased with a score under 30.
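A simplified sketch of the scoring arithmetic just described appears below. The actual NIH procedure involves additional normalization details that are omitted here, so treat this strictly as an illustration of how individual 1–9 votes map onto the 10–90 priority scale.

    def priority_score(overall_scores):
        """Convert overall impact votes (1 = exceptional, 9 = poor) into a
        final priority score from 10 (best) to 90, here as a simple mean x 10.
        The real NIH calculation includes normalization steps omitted here."""
        mean_score = sum(overall_scores) / len(overall_scores)
        return round(mean_score * 10)

    # Hypothetical votes from the assigned reviewers and other panel members
    votes = [2, 3, 2, 3, 3, 2, 4, 3]
    print(priority_score(votes))  # -> 28, a score many PIs would be pleased with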

7. Post-Review Strategies

Often within a week of the review meeting, you will be informed via eRA Commons about whether your application was scored, and if so, the priority score. The written critiques are organized into “summary statements” (still called “pink sheets” by some older investigators because of the color of the paper originally used). Approximately 4–8 weeks later, you will receive this summary statement, which includes a brief account of the committee discussion as well as the written comments provided by separate reviewers. A new feature is that reviewers can now make additional comments that will be made available to the PI. Sometimes it is difficult to read between the lines of reviews and these comments are an opportunity to provide direct recommendations about the overall viability of the project and particular methodological issues.

At this point several things can happen. If your application was scored it will go to a “Council” meeting (the second level of review) where the quality of the SRG review is assessed, recommendations to Institute staff on funding are made, and the program priorities and relevance of the applications are evaluated and considered. If your application was unscored or it went to Council, but was not recommended for funding (or it was recommended, but for one reason or another such as budget issues ultimately wasn’t funded), then you can consider resubmitting. Of note, before the Council meeting you will receive a request from a member of Grants Management staff for the following additional documentation, referred to as “Just-in-time information” (JIT): updated “other support” for key participants; the status of IRB action on your proposal; certification that key personnel have received training in the protection of human subjects. This request for additional information is not an award notice, although it is encouraging because it represents a critical step prior to the notice of grant award (NGA).

It can be difficult to decide what to do next if the original submission is unscored. The first thing to do is resist the very real feeling that you and your grant have been rejected. Without question, a scored grant is better than an unscored one, and in some cases the reviews will indicate serious problems that cannot be addressed without considerable reformulation, or at all. In most cases, however, the eventual fate of the research project is in no way doomed by an initial submission being unscored.

An unscored grant may be revised, resubmitted, and eventually funded, but you should read the reviews carefully and with an open mind to help your decision. There is no simple formula to determine whether you should resubmit. Ask yourself several questions: Do the reviewers acknowledge the importance and innovation of the proposed research? Do they credit you, the PI, with having the appropriate background and abilities to accomplish the work in the area? Are their scientific concerns ones that you can effectively address? If the answer to each of these questions is “yes,” then you should strongly consider resubmission of a revised application. Many of us have had the experience of going from an unscored application to a funded grant award upon resubmission. However, it’s important to be honest with yourself about what is realistic. Talking with a relevant program officer also may be helpful to discuss next steps, especially if they were in attendance at the review committee when your application was discussed and can offer insights from the discussion.

We don't mean to suggest that it is easy to objectively consider reviews of your grant. Particularly on a first read, it is easy to assume that a reviewer (or all of the reviewers) clearly didn't understand the grant and that their points are all wrong. This is a natural reaction and is probably rooted in a healthy instinct for self-preservation. However, an honest assessment requires stepping away from the grant and the review for a few days and then returning with the assumption that the reviews probably have a lot to offer. Moreover, the first set of reviews will always have an impact on the review of your resubmitted grant, and resubmitted grants that do not heed critical feedback almost never succeed. It is also important to read between the lines. In most cases, a poor score with few accompanying comments, none of which are really addressable, is worse than a poor score with many comments that are addressable. Unless you are sure you understand why the grant was unscored, and what you can do to meaningfully improve it, you should consider that it may be hard to change reviewers' minds and that it may not be best to resubmit.

If you do decide to resubmit, possibly the most important part of your revised application is the single page you are given to address all reviewer comments, called the Introduction to the Revised Application. The success of your application will be greatly influenced by the thoughtfulness of the response to the reviewers outlined on this page. Although your revisions will also be reflected in the body of the application (we recommend marking them with underlining rather than bolding, to save space if needed), it is crucial to show that you understand and have addressed the reviewers' points. In the rare cases where you disagree with a reviewer's point, it is crucial to address the spirit of the point and make a clear theoretical, empirical, or practical argument to defend your choice. Although mindlessly agreeing with reviewers or other empty attempts at pandering will certainly not help your case, declining reviewer suggestions should not be undertaken lightly. Also be sensitive to the “tone” of your response, because the reviewers most certainly will be!

Finally, if your application was unscored (or, in some unusual cases, scored) and you are not optimistic about your likelihood of significantly improving your chances for funding in a resubmission, then you can consider going back to the drawing board and developing a new application. A new application can be entirely different, with a new focus and aims, or it can be similar in rationale and goals yet meaningfully distinct from the original, with the differences manifesting in the focus of the question, the methodology used, or the specific way the larger question is addressed. Although there is no official connection between these applications, the good news is that the new application often benefits from your experiences in preparing the original application and digesting its review.

8. Tips
8.1 Respect Deadlines

For many individuals, deadlines are crucial to setting goals, staying on task, and not losing motivation. Be aware of the deadlines and what goes into getting things done in a timely manner. Be more conservative with components that rely on others, such as letters of support or analytic sections prepared by a statistician. With that said, deadlines have their drawbacks, because they can lead to procrastination and a burst of work near the deadline, without ample time to run ideas past others and to get a sufficient pre-review of the application from collaborators and potentially helpful colleagues. For this reason it can be useful to set a timeline for each step along the way to submitting your application. As can be seen in the mock timeline in Table 15.3, it can take nearly two years from the start of idea development to actual funding if resubmission is needed (as it most often is). This timeline illustrates the previously stated notion that grant funding is more of a marathon than a sprint.
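For a rough sense of the elapsed time implied by the mock timeline in Table 15.3, the sketch below counts the months from the start of idea development to the start of funding after one resubmission. The calendar years are hypothetical placeholders; only the month-to-month spacing comes from the table.

    from datetime import date

    def months_between(start, end):
        """Approximate number of whole months between two dates."""
        return (end.year - start.year) * 12 + (end.month - start.month)

    idea_start = date(2023, 10, 1)    # initial development begins (hypothetical year)
    funding_start = date(2025, 7, 1)  # funding begins after one resubmission

    print(months_between(idea_start, funding_start), "months from idea to funding")
    # -> 21 months, i.e., nearly two years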

8.2 Ensure Feasibility

While you want your application to be methodologically rigorous and to have high impact on your field, you cannot lose sight of feasibility. The reviewer code word used when there are doubts about feasibility is “over-ambitious,” and it is a clear kiss of death when this term is used to describe an application. Therefore, keep your specific aims focused, and make hypotheses that you can clearly tie back to theory and/or pre-existing data. Consider carefully whether multiple studies within a single grant are necessary; although such designs can be quite elegant, the connections between studies can present many pitfalls, especially if subsequent studies rely on particular results from initial studies. Remember, although your passion for your research area may be strong and your intellectual curiosity high, each grant application represents only one small step in a research career that may last for several decades. Try not to be ruled by emotions (especially when receiving and responding to critical feedback) and keep a clear eye on your long-term goals. Persistence, patience, and creative problem solving are usually critical ingredients in the career of a successful independently funded investigator.

8.3 Be Clear

NIH clearly states that you cannot have any contact with reviewers before, during, or after your review. Therefore, the only way to get your point across is through the application itself. Within the Approach section, the subsections on Preliminary Studies and on Potential Problems, Alternative Strategies, and Benchmarks for Success provide a great opportunity for this. In the Preliminary Studies subsection, you can make the case that you have sufficient background (and pilot data, especially for an R01) to conduct this work and that it marks a logical next step in this line of research, both for you and for the field in general. The subsections on Potential Problems and Alternative Strategies (previously referred to as design considerations) are your chance to walk reviewers through the complex discussions you and your collaborators had when you determined the best decisions for the application. This presents a real opportunity because some applicants largely ignore it and at best tell the reviewers, essentially, “don't worry, we know what we are doing” or “we've got it covered.” As a new investigator, it is up to you to ensure that the reviewers understand the decisions you made. These subsections also increase the odds that the primary reviewer can best present your application and that others reading it can quickly understand some of its key features. Think of them as giving reviewers access to all the critical thought that went into the strategies you ultimately chose (as well as those you didn't choose). Finally, your Benchmarks for Success show a level of sophistication and often can help ameliorate any fears about feasibility. This subsection benefits greatly from a table that outlines the planned activities of the grant and the deliverables at each time point.

8.4 Show You Know the Literature and Your Work is Adding to it

Especially for a young investigator, the research team is crucial, and it is important to clearly highlight each member's role in your application. For K awards, mentors are especially key elements of a successful application. It is critical to tell a clear story about each person's role in your training, with as much detail as possible. To be explicit: it is not enough to simply list the “right” people. It is necessary to explain who they are and why they were chosen, show that you will have the right training experiences with them, and describe how each mentor will contribute to your career development.

For R grants and the young investigator, the role of collaborators can be a bit more ambiguous. In some academic settings, you may experience a tension between the traditional value placed on independence and the growth of team-based or multidisciplinary science, in which it is no longer expected (or even possible) for one individual to master all elements of a complex research project. In fact, at NIH it is usually expected that applications will include a team of experts representing different domains. For example, in applications related to mental health and addictions it is common to see psychologists, psychiatrists, statisticians, anthropologists, epidemiologists, neuroscientists, economists, and others collaborating. A true research team involves well-selected experts who work well together, each contributing unique and relevant expertise to the proposed project. It is crucial to clearly articulate the key parts of the application and the role that each collaborator plays in those parts.

8.5 Trust the System and Put Your Best Work Forward

As mentioned above, there are strategies to increase the odds of funding, such as trying to steer your application to the most appropriate committee, “guessing” what likely reviewers might want, and talking to program staff to avoid making mistakes or proceeding in an unproductive direction. However, you should be careful about these efforts becoming more about gaming the system than about developing the best application for you. It is worth noting that for every great game player, there is a straight-shooting scientist who has a strong sense of their interests, is willing to find a mechanism at NIH that accommodates those interests, makes efforts to align their interests with those of NIH (including RFAs and PAs) without betraying their own actual interests, and simply allows the process to play out. This is not to say that some strategizing is not warranted, but when the strategies approach a game-like level, they hold as much likelihood of backfiring (or simply being irrelevant) as of actually helping.

9. Final Words

In conclusion, the NIH grants process can be frightening and exhausting, and the secrets to securing funding can sometimes feel quite elusive. However, your biggest weapon in this battle is knowledge, which gives you both the direction you need to be most effective in developing your application and the confidence to endure the ups and downs of the process. This chapter is simply one of many available resources, and we encourage you to use as many as possible as you begin to develop your own style and your own secrets to success!

16 On Being a Woman in Academic Psychology

Kristen A. Lindquist, Eliza Bliss-Moreau, June Gruber, & Jane Mendle

The goal of this chapter is to offer a candid snapshot of what it's like to be a woman in modern academic Psychology and Neuroscience. We also hope to generate conversation around shared experiences and provide a vision of a more equitable path forward for women in our field. By academic Psychology, we mean careers focused on research and teaching in the fields of psychological science or Neuroscience. We are most directly speaking to careers housed in universities, colleges, or research institutes, but of course the issues we discuss are not unique to those places (or even to Psychology or academia, more broadly).

We are four mid-career psychologists who identify as women. We have held appointments in Departments of Psychology and Neuroscience and of Human Development at major universities, in research centers housed within universities, and in clinical and academically oriented medical schools. Our research collectively spans areas of Psychology that include social, affective, clinical, developmental, comparative, and neuroscience perspectives. Oh, and we're writing to you in the midst of a pandemic that has caused seismic shifts to health and well-being, financial stability, and work–family dynamics that intersect with gender and other identities in important, and unprecedented, ways (Guy & Arthur, 2020; Minello, 2020; Minello et al., 2021).

We’ve each been interested in women’s issues for many years; those interests have piqued as we’ve moved through different phases of our lives and careers, through the BA, PhD, clinical internship (where applicable), postdoc, junior faculty, and mid-career years. During these different phases of academic Psychology, we have experienced first-hand how this process might (and might not) be different for women. One would imagine that fields now dominated by women such as Psychology would have overcome the longstanding gender disparities that affect many workplaces. Yet our own personal experiences suggested that women continue to face disparities in the field, and this led us to take stock of and synthesize the literature on different academic outcomes and productivity for women. Specifically, we decided to systematically investigate the science on gender parity in science, more broadly, and Psychology, in particular. Our paper united 59 women psychologists across major universities and the business sector spanning the US, Canada, and Australia. Authors are former Presidents of major scientific societies, Department Chairs, “Genius Award” winners, public intellectuals and TED speakers, best-selling book authors, and highly respected scientists. Yet our analysis revealed that many gender disparities are still alive and well in Psychology, despite the success of this set of female authors (Reference Gruber, Mendle, Lindquist, Schmader, Clark, Bliss-Moreau, Akinola, Atlas, Barch, Barrett, Borelli, Brannon, Bunge, Campos, Cantlon, Carter, Carter-Sowell, Chen, Craske and WilliamsGruber et al., 2021). We’ll be drawing from those data here with a sprinkling of personal anecdotes where applicable.

Our task in this chapter is to share with you what it's like to identify as a woman in (academic) Psychology. So we decided to walk you through the good, the bad, and the really bad, along with hope for a better future for women in our field. Why is identifying as a woman even meaningful for your psychological career, you might ask? Oh, but we wish that it wasn't! Psychology (and to some extent, Neuroscience) has seen a huge influx of women since the 1970s and 1980s. In fact, so many women have entered the field in the past few decades that one might assume that Psychology is immune to the gender-related problems faced by other Science, Technology, Engineering, and Math (STEM) fields, where, as relative minorities, women's career outcomes and sense of belonging significantly lag behind those of men. Yet as we recently revealed in Gruber et al. (2021), the relative representation of women in Psychology does not make it immune to gender-based disparities in career outcomes.

Throughout this chapter, we take an evidence-based approach by using the tools of our own science to evaluate questions about whether women do (or do not) face gender-related challenges in academic careers in Psychology. Overall, the good news is that women are entering careers in academic Psychology at record rates. They are becoming assistant professors and associate professors at unprecedented rates. The not-so-good news is that they still trail behind men in numbers of papers published, grants held, impact, and financial compensation. As we discuss below, the reasons for these differences stem from a host of systemic, interpersonal, and intrapersonal factors (some of which we, as women, can control and effect change on!). The really bad news is that both blatant and subtle sexual harassment and other forms of bias (racism, classism, homophobia, etc.) that intersect with gender still exist, and still impact people's careers and well-being. We'll close by discussing what we think we can do about it and how you can make decisions that optimize both your career outcomes and your well-being.

1. An Introduction and Some Caveats

Before we discuss the evidence, we want to begin by introducing ourselves.

Kristen Lindquist: I’m an associate professor at the University of North Carolina at Chapel Hill. I am in the Department of Psychology and Neuroscience and am also a faculty member at the Biomedical Research Imaging Center in the School of Medicine. I direct the Carolina Affective Science Lab and teach courses on Neuroscience and Social Psychology. I got my PhD in Psychology from Boston College and did a joint postdoc in Neurology and Psychology at Harvard Medical School/Harvard University. I focus on how the brain, body, and culture alter emotional experiences and perceptions. I’m married to an academic psychologist (i.e., have a “two-body problem”) and we are parents of a preschooler and a toddler.

Eliza Bliss-Moreau: I’m an associate professor of psychology and a core scientist at the California National Primate Research Center, both located at the University of California, Davis. I train graduate students in our Psychology and Neuroscience graduate programs, but also in our Animal Behavior and Animal Biology graduate programs. I did my PhD in Psychology (social, affective science) before transitioning to a postdoc in neuroanatomy working with animals. My group studies the evolution and neurobiology of emotions and social behavior, using a comparative approach – somewhere in the ballpark of 85 percent of our work is with rhesus monkeys, although we work with other species as well (humans, agricultural animals, and other animal models).

June Gruber: I’m an associate professor and licensed Clinical Psychologist in the Department of Psychology and Neuroscience at the University of Colorado Boulder. I was previously an assistant professor at Yale University after completing graduate school. I got my BA in Psychology and PhD in Clinical Psychology at the University of California Berkeley. I am interested in understanding the connection between emotions and severe mental illness, as well as the science of happiness and positive emotions, more generally. I run a laboratory and teach classes to undergraduate and graduate students focused on positive emotions and mental health. I grew up in California in a working-class family (my mother was a travel agent and my father was a salesman) and was the first member of my family to attend graduate school and experience what life in academia was like. I’m also married to an academic; we met as undergraduates in a philosophy class together and endured several years of long-distance to secure careers together. Much of my own understanding of gender biases first became palpable while I was pregnant and on maternity leave (with my now two young boys: 6 and 4 years old).

Jane Mendle: I’m an associate professor in the Department of Human Development at Cornell University. I got my PhD in Clinical Psychology at the University of Virginia. I study psychopathology during the transition from childhood to adolescence. This is a pivotal time for mental health risk and vulnerability, and I’m interested in why that’s the case.

One of the many caveats we should note is that we bring our own identities to the table here. We are white, cis-gendered women who chose to pursue careers in academic Psychology. That already means that our experiences are not going to be the same as all women's experiences in this field. We are also talking about academic Psychology specifically, so we don't discuss primarily clinical careers (such as being a psychotherapist, social worker, or counselor) or primarily education careers (such as teaching classes full-time or serving as full-time administrators overseeing a campus-wide curriculum), and we don't discuss the industry or government careers that are increasingly available to Psychology PhDs. It's also important to underscore that although we have a range of scholarly and personal backgrounds, our experiences are by no means representative or universal. Our perspectives are simply that – personal viewpoints that might not be shared by others, even those with similar experiences. Moreover, we certainly cannot speak personally to the experiences of women with other intersectional identities, such as Black, Indigenous, and Women of Color (BIWOC) and women of the LGBTQIA community. We know through both personal connections and the empirical evidence that BIWOC scholars and scholars who identify as LGBTQIA face additional challenges that we do not (see Carter-Sowell et al., 2016; Gruber et al., 2021; Zimmerman et al., 2016). We reached out to several women scholars who identify as having other under-represented identities in academia, and they all graciously declined to join in on this effort because their workload right now was already too large. We know from the literature that women with other intersectional identities are especially likely to be burdened with service as the token representative of their identity in a department or even a school (see Gruber et al., 2021). This fact is likely especially exacerbated right now, due to the diversity, equity, and inclusion movements happening in Psychology departments and universities around the US following the racial equity protests of summer 2020 and the disproportionate effect of the COVID-19 pandemic on communities of color.

Another caveat is that we are mid-career and tenured academics, meaning that we no longer face the same pressures as untenured and more junior women (although those pressures are not so far in the past for us, especially for Bliss-Moreau, who, due to her longer training in Neuroscience and Neurosurgery, has only been a tenure-track faculty member for about four years). Given that we are also not yet senior faculty or full professors, we also lack the longer view of what it was like to be a woman in this field when women were in the extreme minority, because the data show that things have steadily changed for the better with regard to gender representation in Psychology departments since roughly the 1980s (Gruber et al., 2021). That being said, being mid-career offers us the unique advantage of "being in the thick of things" – we all balance research, teaching, and increasing service loads amidst all of the challenges of mid-life, including, but not limited to, caregiving.

A final caveat lies in how we use the term "women" throughout this chapter. Neither sex nor gender is binary, and we use the term "woman" here to refer to anyone who identifies as a woman. We acknowledge that the disparities we discuss are compounded for those who identify as non-binary or trans, or who otherwise embody other marginalized gender or sexual identities, and we note that there is not a lot of empirical evidence out there about how aspects of gender identity impact career success and progression in Psychological Science. These caveats aside, we will speak to a host of experiences in academia that we think other women will relate to and may benefit from reading more about.

2. A Preface

Before we really begin laying out the good, bad, and the really bad data, we want to set the stage with our own personal experiences. We have in many ways had really positive experiences as we have moved up the career ladder. We authors are all a testament to the fact that women can and do succeed in academic Psychology (and yes, we recognize that there is inherent survivorship bias in our narrative as a result). We all ascended through PhDs, postdocs/internships, and got competitive tenure-track jobs. We all have tenure at our respective schools, all of which are large, well-resourced research universities. We all hold government and/or private institute grants and direct labs that do well-cited research. We've all been the recipients of early career awards. We should underscore that another piece of good news is that we LOVE our careers – we get to ask and answer big questions through our science. We get to teach and inspire the next generation of psychological scientists and the next generation of citizens, more broadly. We get to be among the thought leaders who use scientific research to weigh in on important societal issues. We have incredible flexibility in often deciding when (e.g., 9–5 or off hours?) and where (e.g., lab or coffee shop?) we want to work and which topics we want to work on. We get to travel to speak with interesting people and work in all corners of the world (at least pre-pandemic). We have meaningful, productive, and largely respectful relationships with our colleagues in our departments and universities, and our colleagues all over the globe. We have found our scientific and personal niches and eventually made our careers what we want them to be. There is a lot to like about this gig.

That said, we have also had many experiences our male colleagues likely have not. We have had other academics comment on our clothes, breasts, legs, hair, the size of pregnant bellies, and our weight. In some cases, they've touched those things. We've been offered positions because a male PI stated he was interested in hiring an "enthusiastic female postdoc." There's been weird unwanted kissing, where you think "is he just a little drunk and trying to be avuncular?" before you remind yourself that no one should EVER kiss anyone they work with without consent. During faculty interviews, we have had a Dean highlight the existence of local "high-end women's clothing stores" as one of the major pros of accepting a position at that school. When seeking serious pre-tenure career advice from university administrators, we have been told not to "wear dangly earrings" and to "dress like other women" in order to succeed. There must just be something about our wardrobes, because our student evaluations throughout the years have often addressed our fashion sense and niceness as much as our course content. We've been asked about our marital status and childbearing plans, in interviews, as "jokes." We've been given formal feedback that we were "too moody" while pregnant. We've been bullied while on parental leave, including the day we literally gave birth.

Our demeanor has also come under scrutiny. We've had concerns raised when we were not "smiling" in meetings with students and colleagues. During meetings with administrators discussing promotion timelines, we've been asked if we wanted "our hands held." During job interviews, senior male professors have closed the door, patted the chair next to them, and said "come on, sit a little closer to me." Once we had jobs, we've been told we should be "thankful" for them and "act happy" in dulcet undertones that clearly imply we didn't deserve them (and to be clear, we are thankful for our jobs, but we also earned them). We have had "equity" adjustments to our salaries, to bring them in line with those of male colleagues, including male colleagues at earlier career stages. Sometimes this has happened after we have directly asked for a raise and been told that our perception of our worth was not accurate. At the time of hiring, we were not offered as much in terms of lab space, start-up, or other financial resources as our same-cohort male colleagues. When we asked for more space, however, we tainted relationships with colleagues, because women shouldn't ask for more. We have been mistaken for undergraduates, graduate students, and administrative assistants because no one assumed that a woman could also be the principal investigator of a lab. We have been censored for speaking up and called "bitches" or "difficult" or "personality disordered" for doing so. We've brought babies to campus during daycare snafus and gotten sideways stares, while our male colleagues have been considered "dad of the year!" for the same behavior. Many people who are under-represented in academia have had some combination of these experiences, but talk to most academic women (we certainly have, over drinks at conferences or late-night texts with friends) and most will report at least some of them.

Let’s look at the data and talk about what you can do to navigate this, and maybe change things for the better for yourself and the women who will follow in your footsteps.

3. The Good News

Despite the harrowing experiences shared above, we have also had many positive experiences and outcomes. Indeed, our own positive experiences in academic Psychology are echoed in the data. Women are now the majority (>70 percent) in undergraduate Psychology classrooms and many Psychology PhD programs (APA, 2017). This semester, in fact, there are only women students enrolled in Mendle's advanced psychopathology seminar. If you are a woman who chooses to pursue a tenure-track job, the data suggest that you are just as likely as a man (if not more likely) to get that job (we will discuss the "if you are a woman who chooses to pursue a tenure-track job" in the next section, because this is key). Other good news abounds. As a woman in today's academic Psychology, you are also as likely as a man to get tenure and to receive the grants that you apply for (again, the caveat being that this holds only for the grants you actually apply for). In our analysis of major Association for Psychological Science (APS) and American Psychological Association (APA) awards, we also found that women are roughly as likely as men to be recognized for their early career research (but not their later career research; see Gruber et al., 2021 for all these statistics).

This news is great, and certainly represents a shift from a time in the not-so-distant past (e.g., the 1970s and 1980s) when women were less likely than their male counterparts to enter PhD programs, be hired into tenure-track positions, and become tenured. Yet the good news can obscure the bad, which is that gender disparities do still exist in Psychology. A quick look at rates of gender representation in the field at large – or even a glance through some faculty line-ups – might give the false impression that women's careers are on par with men's in academic Psychology. We're sorry to say this is not (yet) the case.

4. The Bad News

The bad news is that women still face systemic, interpersonal, and intrapersonal barriers that contribute to disparities in success in academic Psychology. In fact, for almost every piece of good news there is a "yeah, but …" qualification. Let's start unpacking those "yeah, buts." As we review in Gruber et al. (2021), although women are getting hired and tenured at the same rates as men, they still, on average, lag behind men on almost every other metric of career success. For example, women in Psychology apply for fewer tenure-track jobs, they hold more low-status academic jobs (e.g., as adjunct professors or university administrators), they publish less, they are cited less, they submit fewer grants, they are invited for fewer talks, they are seen as less "eminent," they are financially compensated to a lesser extent (even when controlling for productivity), and they likely do more unpaid service than men.

But why? We know that these gender disparities don't exist because women are less intelligent than men. That hypothesis has been laid to rest (see Ceci et al., 2014 for a discussion of, e.g., the lack of evidence for strong gender differences in math). It's also the case that women's and men's academic products tend to be considered of comparable quality when compared head-to-head (e.g., women's grants are rated as good as, if not better than, men's; Hechtman et al., 2018), meaning that women's lesser productivity is not a product of lesser scientific capacity. As we conclude in Gruber et al. (2021), a mix of systems-level factors, interpersonal processes, and intrapersonal processes likely contributes to the differences observed in women's versus men's career success. Let's unpack some of the ways that these factors might play out.

4.1 Academic Pipeline

First and foremost, let's address the fact that fewer women than men apply for tenure-track academic positions. This is almost surely a product of a "leaky pipeline" – a systematic exit of certain people from the career path. Gender-related pipeline leaks are well known in science (Alper, 1993), and it is possible that Psychology's pipeline is "leakier" than other fields' because we start with so many women interested in our undergraduate major. Yet at each stage from undergraduate, to PhD, to postdoc, to faculty positions, women drop out of the field at rates disproportionate to men's (Ceci et al., 2014; Gruber et al., 2021). Meanwhile, women are over-represented in adjunct professor positions, in university administration, and in fields outside of research such as education and healthcare (APA, 2017; NCES, 2013), suggesting that women are systematically "opting out" of tenure-track academic Psychology. As we note above, when women do apply for tenure-track jobs, they are more likely than men to get them (both in large observational studies and in experiments; see Gruber et al., 2021), so that's good news. As we mention above, another piece of good news is that women who go up for tenure are now as likely as men to get it. Yet women lag behind men in the rate of being named full professor, which is the highest (non-administrative) academic rank post-tenure. It's not clear from the data whether this is just a time lag, and we'll see more women fill the ranks of full professor in the years to come, or whether women are getting "stuck" at the mid-career associate rank and are not moving into more senior positions and recognition (Gruber et al., 2021).

Why is it that women leak from the pipeline at a greater rate than men, and why are there not more women in full professor roles if women have been filling the pipeline in large numbers since the 1980s? One possibility is that women do not want academic Psychology jobs and never did – in this scenario, women are leaving the field at each juncture because they are choosing careers that they prefer more. That is of course a reasonable interpretation, especially at the undergraduate level, where many may see Psychology as a great generalist major that will prepare them for other careers. Alternatively, you could argue that this means that we're missing out on the opportunity to convince more women that they might like to go on to become scientists in our field. Either way, this doesn't address why the pipeline leaks following a PhD, or especially a postdoc, when trainees have gotten further along the career path toward becoming professors. Don't get us wrong: it would be great if women were actively choosing careers that they most prefer. But we suspect that women are, at least in some proportion, being forced to "opt out" of the pipeline due to a combination of factors at various levels. We review these pressures in full in Gruber et al. (2021) and point to a couple of especially important ones here.

Let’s start with the systemic factors that might be at play in the pipeline. Systemic factors are those related to the values, norms, and institutions that our society creates that in turn impact interpersonal and intrapersonal behavior. One major systemic set of values and norms – that in turn shape our institutions – are gender role expectations. Gender roles are prevalent cultural stereotypes about the behaviors, personalities, and occupations that women and men should engage in and hold (e.g., Reference Wood, Eagly, Olson and ZannaWood & Eagly, 2012). Gender role stereotypes have a major impact on the institutions in which we work and live. The fact of the matter is, our academic institutions (and work institutions, and political systems, more generally) were not created with caregiving in mind. Yet due to gender-based stereotypes related to caregiving and the biological practicalities of childbirth and early child rearing, women are expected to be – and frequently are – the primary caregivers of others in our society. Childcare, eldercare, and care for extended family and community members in need often falls to women. Our society expects, and frankly, benefits from, the largely free caregiving labor expended by women. For instance, when it comes to childcare, American mothers spend roughly 75 percent more hours per week on childcare than fathers (14.0 vs. 8.0 hours; Reference Geiger, Livingston and BialikGeiger et al., 2019). Women are likely aware of these realities – in fact, they have been implicitly or explicitly faced with them since early childhood and both women and men see academia as relatively incompatible with raising children (Reference Mason, Wolfinger and GouldenMason et al., 2013). The difference may be that (some) men expect that they will have a spouse who can pick up the slack with regards to childcare while they focus on their academic career. As most academic women are part of dual-career partnerships, they are much less likely to have this luxury than their male counterparts (Reference Gruber, Mendle, Lindquist, Schmader, Clark, Bliss-Moreau, Akinola, Atlas, Barch, Barrett, Borelli, Brannon, Bunge, Campos, Cantlon, Carter, Carter-Sowell, Chen, Craske and WilliamsGruber et al., 2021). What seems key to women opting into academic careers is seeing other women navigate both a career and kids. When women PhD students interact with women faculty who have children, they are more likely to pursue academic careers (Reference Mason, Wolfinger and GouldenMason et al., 2013), underscoring the importance of representation in science and having access to women mentors (see Reference Lindquist, Gruber, Schleider, Beer, Bliss-Moreau and WeinstockLindquist et al., 2020).

These realities are not unfamiliar to us at all. We have all struggled in some way with trying to combine our career trajectory with our preferred geographical location, dual careers, having children, and/or taking care of elderly parents. These are realities that at least some of our male colleagues don't face to the same degree.

Mendle: When I was younger, I was either unusually lucky or unusually oblivious. I didn’t think much about being female in high school. In college, I loved my women’s and gender studies classes, but I didn’t perceive barriers related to my own gender. Ditto for graduate school, where I had a wonderful (male) advisor and cohort of friends, with whom I would occasionally discuss practicalities and observations about being a woman in academia – but again, rarely perceived substantial barriers. Then I became a faculty member and, boom, the fact that I was female was suddenly an enduring part of daily life. People noticed my gender more than I had ever assumed and I, consequently, began to react and respond in turn.

There have been various inflection points in my career, when I’ve thought more or less about gender. I hate to be trite, but becoming pregnant was one of those points. I study puberty, and one of the important aspects of puberty is that it places a big life transition – full of dramatic physical changes – on public display, where people comment on and observe it. Pregnancy made me fundamentally rethink my academic research. I wasn’t ready to discuss it with my colleagues – and yet there it was.

Biological sex differences are not typically talked about in discussions of gender and careers. I understand and generally support the reasons for this, but my take is that they do matter. Women’s fertility clocks coincide with the most important years of career building. This places many – not all, of course, but many – women in an impossible position that men don’t have to grapple with in the same way. And, of course, while both male and female academics have babies, it’s generally only the female academic who has to schlep to the weekly doctor appointments throughout pregnancy and the weekly postpartum physical therapy appointments after. Even setting aside the physical tolls of pregnancy and childbirth, the time those appointments take adds up. For every woman caught in traffic on the way to the OB, there is a man who was able to continue his workday. Talking about these facts is complicated, and there can be a competitiveness or an exclusivity to motherhood culture. It can leave out the experiences of women who aren’t mothers and even hint that the biases they’ve encountered might be somehow lessened because they don’t have or don’t want to have children. I take this as a reflection that we still haven’t solved the real issue of how to make careers and life choices happen in a way that feels right or manageable for a lot of people.

Lindquist: I echo Mendle’s comments that I didn’t really think about my gender until after I received my PhD. I had a really inspiring female mentor with a really successful career and we of course talked about how gender had impacted her career throughout my training, but I hadn’t experienced the effect of gender in my career first hand until I was a postdoc in a Med School and it suddenly felt blatantly obvious that I was female. I remember a prominent neuroscientist running into me in the hallway – I was in awe of his research and was so excited to get to talk to him about mine – and instead of asking me about my research, he looked at my finger and said “oh, so you’re engaged!” To this day, I am so disappointed by that conversation.

It was around this time that I really began to stress about whether I would find jobs for both myself and my academic partner – especially ones that we were both happy with, where we were equally valued and where our careers would be equally fruitful. We were super lucky to land positions at the same university. Our motto was always to take short-term costs for long-term gain (we lived apart for years to both pursue opportunities that best fit our careers) and we worked really hard for our positions but we both still feel like we won the lottery to solve the “two-body problem.” Of course, then we had to deal with trying to figure out how to get our careers in a place where we felt we could have kids, and to try to rear children in a way that was equitable across both our careers. I recall calling up my graduate advisor (another woman) and asking advice about when during my tenure clock I should try to get pregnant. Should I stop my tenure clock? How many grants should I submit before I have the baby? How much time should I expect to really be able to take off? Will my lab fall to pieces if I suddenly cannot spend 100 percent of my waking hours thinking about it? It felt like there were no resources to navigate this next stage. And despite the fact that my husband was experiencing the same thing, it felt especially fraught for me as a woman.

Ultimately, my advice is that these things are going to be challenges in most high-stress careers and are not unique to academia. I have friends who are lawyers, or work in large corporations, and they are no less hindered by their gender or childbearing decisions. Neither workplaces nor governmental policies have done enough to support working families since women entered the workforce in large numbers – it is still largely assumed that workers have an extensive support system to pick up the familial slack while they work on their careers. For academic women in particular, their childrearing years coincide with the PhD or Assistant-to-Associate Professor years, which is a key time for establishing one's career (see Gruber et al., 2021). And as much as we want to assume that men are hindered by childrearing in the same way, they just aren't. As Mendle suggests, even starting with the process of conceiving a pregnancy (if you are able to and choose to give birth to a child), there are undue costs on the person carrying that baby. The many hours that I spent between two pregnancies and infancies vomiting, lying exhausted on the couch in my office in between meetings, going to doctor appointments, and breastfeeding and pumping breastmilk were all hours during which my male spouse was able to do his work (and he is a great dad who otherwise does 50 percent of the childcare. He just couldn't really help with the whole gestation/lactation part of it). My best advice is to be aware of the hurdles that come with your biological sex/gender and to make sure to surround yourself with people who will fully support you in the child-rearing process, whether that is a partner, grandparents, or hired help. Be aware of the gender stereotypes that will put most of the caregiving responsibilities on you as a woman, and have frank and open conversations with your caregiving partner(s) about how to equally divide up tasks and time. Don't go into it blind – you should be aware before you commit to a relationship whether your partner actually has egalitarian views on splitting work and family.

Gruber: I echo comments above by Lindquist and Mendle about early life experiences being distinct from, and seemingly absent of, gender biases that emerged later on in my career. I was perhaps naive in a Pollyanna way as a teenager. In high school, I firmly believed that accolades were awarded based on merit and women would and could achieve the same level of respect and recognition as their male peers. I read with great passion about women writers and poets, and idolized my high school calculus, physics, and English teachers who were all brilliant and women. It seemed as if women could do anything.

I held tightly onto these idealized visions of women's roles in the professional world during graduate school, where many of my peers in clinical psychology were also women. I was also very lucky to have incredible women mentors. In the blink of an eye, however, when I became pregnant later on in my career, things began to shift. My status as a woman became more visible (literally so while pregnant), but not usually in positive ways. I was questioned about my decision to take parental leave and to pause pursuing full-time academic work so I could be present with and raise my babies. I was criticized for being unavailable during my FMLA (i.e., legally granted) parental leave. When I asked an organizer of an invitation-only conference if I could bring my nursing baby with me, I was told that it was not a baby-friendly event and was uninvited (this led my colleague and me to organize a small conference for mothers of young children to address the barriers women with young children face when participating in professional activities). These experiences, however, catalyzed a decision that one of my professional and personal roles thereafter would be to try to shed light on these issues and help change the landscape for other women.

Bliss-Moreau: I delayed partnering and childrearing and I'm here to tell you that there are major challenges associated with balancing working life with life-life even if you do not have a partner and children at the moment. In many places there are assumptions that women without families (particularly women without children) should be more flexible in terms of time on call or the times we teach, etc.; and striking the delicate balance of being supportive of colleagues (particularly women colleagues, who are often doing a disproportionate amount of caregiving) and taking care of self can be tough. There's a lot of talk about how "partner hire" or "partner opportunity" programs can be used to bolster women in the academy by ensuring that their partners are able to be placed in jobs (academic or otherwise), but these systems are only really built to work when people are initially hired. My experience of navigating a dual-academic-career couple's search – trying to secure jobs in the same place where I could do my work (more on this below) once I already had my tenure-track job – was a nightmare. While it was mostly a structural challenge (the systems aren't built to accomplish what we were trying to accomplish), it was a psychological challenge as well. Being told things like "this would have been easier if you'd been partnered when you were hired" is just a tough blow any way you slice it.

While navigating the system with a partner can be a challenge, not being partnered has a whole other set of challenges that are rarely discussed. Like many of my partnered colleagues, I own a house and have pets – and 100 percent of the efforts required to run the household fall to me. Pursuing a tenure-track job and career ladder often means you have to go where the job is, even if that place is far away from your support network. If you’re making a move like that with a partner, you have support in the form of the primary relationship. The constraints of geographical location were exacerbated for me because I work with monkeys (and big groups of monkeys at that!) and there are very few places in the world where I can do that; I honestly hadn’t given that constraint enough thought when making the decision to retrain in neuroscience after my PhD in social/affective Psychology and would encourage trainees to think through the balance of what one needs to do the science one loves and what that means in terms of where one must live. So, I live in California, while most of my family and friends are on the east coast and abroad. In good times, this was primarily an issue of cost and time and that got easier as my career progressed – I could easily book a flight to connect with loved ones and learned to use that flying time to write papers and grants. But it has been exceptionally difficult during the pandemic, underscoring the importance of having a strong community locally.

Of course, the gender stereotypes that influence systems also impact interpersonal behavior and a person's own beliefs. These in turn influence who sees themselves in certain types of careers and can impact whether women opt into academic careers. In America, gender stereotypes include ideas that women are warm caregivers who focus on communal goals whereas men are assertive breadwinners who focus on self-achievement (e.g., Wood & Eagly, 2012). Women may thus initially be drawn to Psychology majors in unequal numbers because it is a field that – at least stereotypically – is seen as high on stereotypically female qualities, such as caregiving and communion. Psychology is also low on stereotypically male qualities, such as requiring "brilliance" to succeed (Leslie et al., 2015). Yet, academic Psychology may prove to have qualities that run against these stereotypes – it can be competitive, requires self-promotion and assertiveness, and does not always have immediate application to communal goals. Gender role congruity theory suggests that these systemic, societal norms interact with others' and a person's own view of themselves to predict how interpersonal processes unfold for women as they climb the career ladder.

For instance, gender role congruity shapes how others behave toward women. As women ascend the career ladder, they can get pushback from others for embodying behaviors that are necessary for a scientific career (e.g., being agentic, being a leader, promoting one's work) but are seen as male-typed. As we review in Gruber et al. (2021), there is indeed evidence that women get pushback for engaging in gender-incongruent behavior – women who try to hold positions of power and who assert themselves often receive blowback from others. Women who identify with other under-represented identities can experience the additive effect of multiple stereotypes (e.g., the angry Black woman, the overly emotional Latina; see Gruber et al., 2021).

Again, we have collectively experienced our fair share of this, ranging from microaggressions to full on aggression.

Bliss-Moreau: The list is long. I’ve had senior men aggress and belittle me publicly in an effort to keep me quiet about important issues. I’ve had senior women attribute mental states to me that I’m not experiencing – typically mental states with gendered content. I am regularly asked whose lab I’m “in.” I’ve been assumed to be the vet anesthetist when I was the neurosurgeon and then had my qualifications questioned because the questioner didn’t believe me (“Too young!” “Clearly inexperienced!”). I’ve been asked about my marital status and childbearing plans during interviews. I am constantly reminded/told/informed that my direct communication style is aggressive, abrasive, and/or angry, when I watch men communicate similarly and be rewarded for being direct.

Mendle: A few years ago, I was editing a special series for a journal, ironically about women’s reproductive transitions. One of our authors requested an extension. She had recently given birth and disclosed that she had some unexpected health complications. The journal had a tight time frame for publication and wasn’t able to extend extra time for her to complete her manuscript. On the one hand, we all understand the realities of publishing and I respect the journal’s decision. On the other, it was a bit bruising to have the author pull out of a series on reproductive transitions because she was, in fact, recovering from childbirth.

Gruber: I can recall several times when attempts to pursue projects or a career in academic Psychology were met with backlash. I recall co-leading the Gruber et al. (2021) paper on the future of women in Psychology, along with Bliss-Moreau, Lindquist, and Mendle (and over 50 other top women academics), and one of our reviewers remarked that the paper lacked true "leadership" and couldn't be accomplished by a team of women-only authors. When we responded with detailed descriptions of the unique and authoritative contributions of our author team, we were met with skepticism about whether our co-authors deserved authorship and with scrutiny of our scientific integrity. At a personal level, I've been told my mind and my work were "superficial" while simultaneously being warned by male colleagues that I was "too ambitious." I once worked hard leading a project for members of my field and one colleague responded by saying "who do you think you are?" I understood that loving what you do and wanting to work hard while being a woman was not a satisfactory answer.

Lindquist: Honestly, there are so many examples, it’s hard to choose. I think it is better now that I’m a bit older. But early on, when I was a young woman it felt like no one could ever possibly imagine that I could be in a position of power, never mind know what I was talking about (although maybe when I’m an old woman no one will be able to believe that I could still be contributing meaningfully to society … we’ll find out!). One microaggression stands out when I first started as a faculty member. I attended a meeting for users of our research computing clusters on behalf of some of my other neuroscience colleagues. So here was a young woman asking a question to a tech person about computing. The guy who worked for research computing kept asking me “so whose lab do you work in?” Every time I responded that I didn’t work in anyone’s lab and that I was a member of the faculty, he kept saying “Oh, do you work for [senior male colleague?]” In a bout of frustration, I eventually burst out “I work in my own lab! I have my own lab! I am a FACULTY MEMBER!” Now, I had become “the angry woman” – a stereotype with its own baggage.

Gender role congruity also shapes how women see themselves and what roles they are comfortable embodying. Women who have spent their whole life being implicitly or explicitly taught to be submissive, communal, and not overly assertive may feel personally uncomfortable embodying these counter-normative behaviors. Thus, as women ascend the career ladder, these systemic factors may increasingly make them feel like academic Psychology is not for them. This fact is almost certainly exacerbated by the very visible demographic shift that occurs as women go from being surrounded by predominantly women to being the only woman in the room.

Mendle: There are some folks who have always wanted to be Psychology professors. That’s not my story. My first major in college was Medieval and Renaissance Studies (yes, really) and I held a lot of checkered, artsy jobs on my way to graduate school. Even today, I’d rather read a novel than a journal article. In my case, I think having all these other, non-Psychology interests contributed to questions of belonging and sometimes made me wonder if the field was the right one for me. I wish I had known earlier on how normal these doubts are.

Lindquist: I distinctly remember the point in my career as an assistant professor when I looked at my graduate cohort and realized it was mostly me and the men left as assistant professors. I felt like “Where did all the women go? Why am I one of the last ones standing?” In fact, I’ve since become friends with other women in the field just by nature of the fact that we are all women of around the same age “who’ve made it.” I now work in a department where almost all the junior(ish) faculty are women neuroscientists (Bliss-Moreau once called my department a “unicorn” for this fact) and that has been a game-changer for me.

Bliss-Moreau: There are few women at my level or above in non-human primate neuroscience, and even fewer in my subfield (social and affective science). I often have had the same experience as Lindquist – looking around the room and being the only woman present. It’s challenging, particularly in the context of sexist jokes, assumptions about “wives at home” to keep the household running, and other male-oriented comments. On the flip side, because there are so few women in my field, I’ve gotten to know many of them, and often the initial conversation is predicated only on the fact that we’re both female neuroscientists working with monkeys. We are fiercely supportive of each other. I’ve also learned to explicitly ask my male colleagues to be allies and developed deep, rewarding collegial connections with men who are willing and able to serve in that role.

Gruber: I often say to my friends that I love what I study, but often feel that I never quite belonged in academia as a person. This became more difficult when being myself meant balancing personal-life choices. I was once planning a visit to my partner, who was doing a fellowship internationally on the other side of the globe. I was told by a senior colleague that if I visited them for more than a week or two, even while on fellowship leave, I would be "destroying" my career. I wondered whether I could continue to pursue this career in academia and still be myself.

4.2 Productivity

It’s clear from these data that women are likely opting out of academic Psychology at greater rates than men. But, in cases where women opt into academic Psychology, what accounts for their lesser productivity? We suspect that a similar confluence of systemic, interpersonal, and intrapersonal factors play a role. Take publishing and grant submission, for instance. Across all of academia, women publish fewer papers and submit fewer grants than men (see Reference Gruber, Mendle, Lindquist, Schmader, Clark, Bliss-Moreau, Akinola, Atlas, Barch, Barrett, Borelli, Brannon, Bunge, Campos, Cantlon, Carter, Carter-Sowell, Chen, Craske and WilliamsGruber et al., 2021). At the systems level, this may be because women are spending their time elsewhere. Childcare is likely one big factor, as we discussed above. However, women also report spending more time on other things besides research when at work, perhaps as a product of systems-level pressures. For instance, in Reference Gruber, Mendle, Lindquist, Schmader, Clark, Bliss-Moreau, Akinola, Atlas, Barch, Barrett, Borelli, Brannon, Bunge, Campos, Cantlon, Carter, Carter-Sowell, Chen, Craske and WilliamsGruber et al. (2021) we review the fact that women are likely spending more time on unpaid service within their departments, insofar as service is seen as a “communal” activity that is more expected from women. Women also report that they spend far more time on unquantified sorts of service activities such as mentoring undergraduates, discussing careers with graduate students, and generally “taking care of the academic family” (Reference Guarino and BordenGuarino & Borden, 2017). These stereotypes are exacerbated for BIWOC, who may be the “token” member of their under-represented group and seen as the one in charge of diversity efforts in the department. BIWOC and other under-represented scholars may also be expected to, or feel compelled to, mentor students who are under-represented in academia (see Reference Gruber, Mendle, Lindquist, Schmader, Clark, Bliss-Moreau, Akinola, Atlas, Barch, Barrett, Borelli, Brannon, Bunge, Campos, Cantlon, Carter, Carter-Sowell, Chen, Craske and WilliamsGruber et al., 2021 for a discussion).

Lindquist: As the most junior female faculty member on my hallway at work for some time, I have spent a lot of time having graduate students pop in to ask advice about their research and careers, share their personal and mental health concerns, and even ask for advice about how to navigate issues with their mentors. Don’t get me wrong, I am happy to serve this role for students – and indeed, these roles are really necessary and undervalued by academia and our society more broadly – but it’s not lost on me that my male colleagues are writing papers while I’m offering a teary student a pack of tissues.

Bliss-Moreau: Separate from my formal service commitments (that are pretty easily tracked on a CV), a lot of the service that I do and that I see my female colleagues doing is sort of behind the scenes and hard to document. Like Lindquist refers to above, I spend a lot of time with people who would just pop into my office looking for a conversation on mentoring, career issues, or navigating the politics of our research center. Sometimes those are folks from my own group, but often they are other trainees and staff people at the center. I like those conversations and find them rewarding, and I am certain I would not be where I am today had I not had the opportunity to pop into other people’s offices and have similar exchanges, so I recognize their importance and value. But they take time, and often require emotional energy. A senior female colleague told me that she actually tracks the number of hours she spends in these conversations and reports them at her regular merit reviews because around a quarter of a 40-hour work week is devoted to such interactions. When discussing the number of hours I devote to these sorts of interactions, a male colleague recently told me that I “should care less.” My response: “I could care less if you would care more.” It’s a standing joke among my female science friends that we should make that into a bumper sticker.

One thing to be aware of in thinking through job offers and plans for career trajectories is that universities differ in how they recognize service work – both the sort of informal work described above and formal service work. In this vein, I do think that the University of California is really exceptional, at least with regard to recognizing formal service work. At Davis, we have a very clear career ladder with steps and regular reviews during which our records are evaluated, and we are promoted anywhere from 1 step (normal advancement) to 2 steps (extraordinary advancement). Evaluating service is part of regular merit reviews, and we can be rewarded for exceptional service work with additional steps. In 4.5 years, I have had two reviews (one merit review within the assistant professor rank and one to promote from assistant to associate), and in both cases I was promoted more than 1 step, in large part because of the formal service that I do.

It is possible that interpersonal processes such as bias also contribute to differences in publishing. There is some evidence that editors serve as biased gatekeepers of the science that gets published in their journals. For instance, evidence shows that from 1974 to 2018, male editors at top social, cognitive, and developmental journals were significantly less likely to accept papers that were authored by women versus men; female editors at this time did not show any bias (Bareket-Shavit et al., in preparation). This effect mirrors evidence that white editors are less likely to accept the papers of BIPOC scholars in Psychology (Roberts et al., 2020) and suggests that BIWOC may experience an additive effect when it comes to publishing. An unpublished paper in economics suggests that women authors may face a much longer review process than men, during which their papers are held to higher standards (Hengle, 2020).

Fortunately, the data on grant review do not seem to show the same degree of bias. Although men hold more grants overall, this appears to be because they submit more grants overall (see Gruber et al., 2021). Studies do not find that women’s grants are reviewed more poorly and, in some cases, women’s grants may even fare better than men’s in review (Gruber et al., 2021). Many grant review processes include an evaluation of the person carrying out the science (“the researcher”) in addition to a review of the science itself. There is some evidence of bias when the decision architecture encourages reviewers to focus on or foreground the “researcher” versus the “science” (Witteman et al., 2019). Women’s grants are rated worse when ratings of the “researcher” are weighted more heavily than ratings of the proposed science itself, consistent with other evidence showing that women are less likely to be described as “leaders” and “pioneers” in reviews (Magua et al., 2017).

Intrapersonal processes also play a role in women’s productivity. The biggest take-away from the data is that women submit fewer products (be they publications or grants) than men, and this may stem from their own beliefs or preferences. Women may believe that they should spend more time on communal tasks such as service, and doing so may take away precious research time (this might happen because they feel pressure or because they genuinely find these other tasks rewarding). Women may also have internalized bias and expect more pushback on their work. This can create a cycle of perfectionism in which women take much longer to produce publications or grants. The evidence is perhaps consistent with this interpretation insofar as women’s grants are (at least in some data sets) rated as stronger than men’s, but men consistently submit more grants overall (see Gruber et al., 2021). This suggests that women may be taking a different approach than men, placing all their metaphorical eggs in a single, perfectly crafted basket, whereas men are placing eggs in multiple … er … less well-constructed baskets. Finally, as we review in Gruber et al. (2021), there are small but consistent gender differences in assertiveness and self-assuredness, which may mean that women are just less comfortable than men with “getting their ideas out there,” an internalized gender stereotype that could ultimately hurt women’s productivity.

Again, we certainly have personal experiences of what has seemed like bias in the publication and grant-receipt sphere.

Mendle: I’ve had to become less “precious” about my work over the years. I’m a slow writer. But as much as I love language, pondering each and every word for its lyrical value doesn’t work in the current academic climate. The simple truth is that the field – at this time – is demanding both high quality and high quantity. The best solution I’ve found has been collaborators. When you have the right group of collaborators, their feedback can push your ideas in new directions – and, in the best of scenarios, their skills are opposite your own. I love working with June, for example, because she is a rapid writer and balances my own tendencies in that area.

Lindquist: I do try to reflect a lot on my own productivity and how I ultimately want to – and do – spend my time. For instance, a few of us were recently involved in writing a comment on a paper that had been published in a top-tier journal. This paper drew a lot of criticism across science because it wrongly drew the conclusion that PhD students shouldn’t work with women if they want to have impactful careers. We busted our butts (during a pandemic when we already had limited time) to get this comment out there to correct the published record (see Lindquist et al., 2020 and a subsequent popular press article about it and the now-retracted original paper at www.wired.com/story/as-more-women-enter-science-its-time-to-redefine-mentorship/). Although it was important to write a formal comment on this paper – and the original paper was eventually retracted – it was definitely not lost on us that as women scientists, we were spending our time responding to someone else’s biased scholarship rather than writing our own papers.

My broader take on publishing and productivity is this: I do think that quality is more important than quantity, and I strive for quality above all else in my lab. That said, I urge women to recognize that publishing is one of the clearest metrics of success in this field and the easiest metric to quantify. So I urge students and junior women to be especially mindful of how to achieve their idealized productivity while also focusing on ways to shift norms surrounding what is most valued in academia. For instance, former APS President Lisa Feldman Barrett recently discussed the downsides of the “publications arms race” (www.psychologicalscience.org/observer/the-publications-arms-race) and argued that we had gone too far in expecting junior faculty to have dozens of papers by the time they are applying for their first job.

This said, socialized gender differences in self-esteem and comfort with self-promotion almost certainly contribute to some of the differences in publication rates, impact, and grant receipt observed in the literature, insofar as (at least statistically) men submit more papers, may submit to higher-impact journals, and submit more grants. Women should aim big when thinking about where to submit their work or when submitting grants. Remember cheesy old adages such as “perfection is the enemy of the good” and “100 percent of the shots not taken don’t go in.” I do think that women, in particular, hold themselves to impossible standards (probably in large part because the rest of society expects those standards of us). This might also mean sometimes having to shirk more “female” roles, such as being the one to organize a meeting, worrying about the well-being of every student in your department, or trying to take on impossible societal roles such as “the perfect mother.” Another academic friend and I have a running joke about how we refuse to personally handcraft Valentine’s Day gift bags for our kids’ classes and just really don’t care if that makes us “less perfect” parents.

Bliss-Moreau: I think the quality/quantity issue discussed by others is really important. But another related issue is figuring out how to divvy up one’s time across tasks for “optimal” productivity and which ideas to pursue. I’m still in the process of figuring out how I want to spend my time in terms of what work I do, both with regard to types of work (papers, grants, advocacy documents) and also what questions I ask. I think striking the right balance is hard, likely changes across career stages (at least that’s been true for me so far), and also shifts as the norms of the academy shift. I try to balance work in my group between core questions that we have a good sense we’ll be able to answer and the high-risk/high-reward work that we all love but that ultimately may not turn into papers and grants. I’ve struggled with this balance a lot, and we have just recently begun to pursue the high-risk work – work that I’ve been explicitly told is “crazy” or “too far out there” or “unlikely to pay off.” The story that I have about this is that I waited until I was tenured and had multiple big grants to fund the group, so the risk of failure was somehow less. But I see men in positions similar to where I was pre-tenure and pre-R01 chasing ideas that get similar feedback, which makes me wonder whether the difference between me and them is related to gendered stereotypes about brilliance (discussed above). Regardless, pursuing some of the “out there” ideas has led my group to a burst of productivity, even with regard to advancing some of our more incremental and “less sexy” work. It hadn’t occurred to me until we dove in that the risk of pursuing the high-risk work might be mitigated by productivity on low-risk work, and that overall productivity would increase (and we’d have a lot more fun) if we were doing more high-risk work. The important thing for me as a mentor in this vein is to make sure that the high- and low-risk work is distributed across trainees so that no one runs the risk of not having success during their training phase.

4.3 Impact and Financial Remuneration

Finally, let’s deal with scientific impact and financial remuneration. Even controlling for productivity, women have less impact in Psychology in terms of citations and are paid less (Gruber et al., 2021). As we review in Gruber et al. (2021) and Lindquist et al. (2020), there are longstanding and consistent biases in citation rates that cause men’s papers to be more highly cited than women’s. Some of this is driven by men’s relatively greater tendency to self-cite (King et al., 2017). Other recent evidence finds that in neuroscience, not only do men self-cite more, but they also cite other men more than they do women (meanwhile, women cite men and women equally; Dworkin et al., 2020).

Other evidence again points to systemic, interpersonal, and intrapersonal processes that impact both women’s and men’s behaviors around publishing in the journals most likely to be impactful. In Psychology, as the impact of a journal increases, the prevalence of women authors on the papers published there decreases linearly (Odic & Wojcik, 2020). This may be due to systemic factors: for instance, if women are spending relatively more of their time on other tasks like service, teaching, or childcare at home, they might have less time to take a risk and “aim high” by submitting first to a top-tier journal and then revising their manuscript afresh for each new submission if it doesn’t get in.

Women might also be less likely to submit to high-impact journals because they might have had bad experiences there in the past. The finding that editor gender predicts the gender of published authors (Bareket-Shavit et al., in preparation) suggests that women might face systematic difficulty publishing at certain journals. Note that, to our knowledge, research has not addressed how journal impact, editor gender, and women’s publication rates interact. That said, stereotypes that women are less “brilliant” than men (Leslie et al., 2015) could contribute to implicit bias against women’s findings at top journals. At least in Psychology, many editors are not blind to author names (and presumed gender), even if reviewers are. Finally, women may not publish in the top-tier journals that are most impactful because of small but stable gender differences in intrapersonal processes such as self-esteem or self-promotion (see Gruber et al., 2021). Women may simply feel that their work is not important enough to be considered at top-tier journals due to internalized stereotypes that women are less “brilliant” than men or that women should not self-promote.

The findings on impact are disconcerting, but what is particularly troubling is that women are also recognized less financially for their work. As we review in Gruber et al. (2021), women receive, on average, 68–99 percent of what men receive in salary as professors. These discrepancies are starkest for full professors, where women at R1 universities make 81 percent of what men make. It is possible to argue that women make less because they are less productive on average – and salaries and raises are based at least in part on merit – but a recent study of New Zealand academics found gender disparities in financial compensation even when comparing equally productive men and women scholars (Brower & James, 2020). For women on 9-month salaries (typical of US tenure-track jobs), differences in base pay may be exacerbated by differences in grant funding success when grant funds are used to pay 3 months of “summer salary.”
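To see how a base-pay gap can be compounded by summer salary, here is a minimal worked sketch; the figures are purely illustrative (they are not the data reported in Gruber et al., 2021), and it assumes the common arrangement in which each grant-funded summer month is paid at one-ninth of the 9-month base.

% Purely illustrative arithmetic; B and the number of summer months m are hypothetical.
% Assumes each grant-funded summer month is paid at 1/9 of the 9-month base salary B.
\[
\text{twelve-month pay} \;=\; B + \frac{m}{9}\,B,
\qquad B = \text{9-month base salary}, \quad m \in \{0, 1, 2, 3\}.
\]
% If one scholar's base is 0.81B relative to a colleague's B, and she funds one summer
% month to his three, the ratio of their annual pay is
\[
\frac{0.81B\,(1 + \tfrac{1}{9})}{B\,(1 + \tfrac{3}{9})} \;=\; \frac{0.90}{1.33} \;\approx\; 0.68,
\]
% so a 19 percent gap in base pay widens to roughly a 32 percent gap over the full year.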

The gender pay gap, as it’s called, is certainly not unique to academia, and its mechanisms are hotly debated. Some of the gap may be linked to systemic factors, such as constraints on women’s mobility when applying for and accepting jobs across geographical locations. As we discuss above, being in a dual-career partnership may dampen women’s desire to apply for tenure-track jobs at all; even when they do apply, it may limit their ability to apply broadly or to accept any given offer, because heterosexual women are more likely than heterosexual men to put their spouse’s career first in the case of a “two-body problem” (Mason et al., 2013). This alone could weaken women’s position in negotiations and their ability to seek out the best-paying job.

In addition to systemic factors, there are well-known interpersonal and intrapersonal factors that contribute to the gender pay gap, such as women’s ability to successfully negotiate for themselves during initial offers and to attain retention offers. Again, because of stereotypes of women as communal, negotiation partners such as chairs and deans may be less likely to expect women to negotiate for pay raises during hiring or retention and may be biased against them if they do negotiate (Amanatullah & Morris, 2010). Meta-analyses show that women negotiate less than men do (Kugler et al., 2018), either because they are aware of the potential backlash associated with doing so (an interpersonal explanation) or because they are not comfortable advocating for themselves (an intrapersonal explanation). Other studies find that women negotiate just as much as men but are less likely to have their requests granted (Artz et al., 2018), suggesting that women may receive feedback over time that negotiation efforts are not worth it. Women are also less likely than men to receive outside job offers, which can result in fewer “retention offers” from their home university that increase salaries over time. This may be because other schools are less likely to seek out women to “poach,” because women are less likely to apply for these jobs, or because their home universities are less likely to put up the money to retain them. It’s worth noting that women are also granted less financial support outside of salaries than men. In the biomedical sciences, men receive 2.5 times more in start-up funds than women do (Sege et al., 2015), which alone could produce the discrepancies in productivity and grant receipt that may exacerbate the gender pay gap over time.

Our personal experiences of gender differences in impact and finances are varied. On the one hand, we feel lucky to be well-remunerated for the work that we love doing. On the other hand, we are aware of ways in which there has not been gender equality in our pay and we have experienced “equity adjustments” to our salaries.

Lindquist: As we reviewed above, men self-cite more than women and their impact is increased for doing so. I happen to self-cite a fair amount because I cite my own theoretical approach (which, rightfully, is driving my empirical work). Yet I have been told by reviewers that I self-cite too much. I often wonder, do men ever get this comment during review, given that we know empirically that they cite themselves more?! I somehow doubt it …

Gruber: Similar to Lindquist, my work falls within a subfield where only a small number of authors do work similar to our lab’s. In these cases, you’re penalized if you cite your own work. But the alternative – not citing one’s own work to support a claim or next-step study – is inappropriate and even intellectually dishonest. Yet, as women, we are encouraged to cite others’ work disproportionately to our own. I have wondered what the proposed alternative is – wait for other (male) colleagues to publish their work first so that we can cite it in place of our own?

Bliss-Moreau: One of the major challenges that women face is the lack of transparency around salary and remuneration and social norms that suggest that asking others directly about those things is a major faux pas. At least for me, it has been hard to know what to ask for without knowing what is reasonable (perhaps this is a particularly female concern?) and it is here that working for a public university has major benefits. Our salaries are public (although the numbers in the public database are often not perfectly accurate), which provides a solid starting place for negotiation and also provides data by which the institution is held accountable for equity. When I was first hired, I looked up men and women with similar CVs and argued that my starting salary should be increased based on how much those other people were making. I was told no, that I wouldn’t be compensated more, and ultimately signed my offer. A few months later, I got an email indicating that my record had been reviewed as part of an equity review (an internal process that we have to ensure equity in pay and ladder step across faculty) and my salary had been increased – to basically the value for which I’d asked. Had I not had access to data about other people’s salaries, I probably would not have asked. And, had there not been a system in place for accountability, I probably would not have received the salary bump.

Mendle: I’ve had some complicated dialogues over how to spend my start-up or other research funds, even for the most mundane or necessary of purchases. Again, as Lindquist says, it’s easy to find an alternative, non-gendered explanation, yet I’ve sometimes wondered if my male colleagues have had as much pushback over similar issues or if the pushback is phrased in the same way. I’ve repeatedly been asked “are you sure that’s a wise choice?” about everything from participant compensation to the number of computers in my research lab.

5. The Really Bad News

You thought it couldn’t get worse, right? Well, here’s the really bad news: Overt sexual harassment, sexual assault, and racism still exist in academic Psychology (and really everywhere – witness #MeToo – but there is a sense that psychologists should be better about this given what we study). Recent high-profile lawsuits and resignations (e.g., www.nytimes.com/2019/08/06/us/dartmouth-sexual-abuse-settlement.html; www.sciencemag.org/news/2020/03/university-rochester-and-plaintiffs-settle-sexual-harassment-lawsuit-94-million; www.phillyvoice.com/penn-professor-kurzban-resigns-sexual-misconduct-allegations-relationships-students/) have made clear that gendered power dynamics still exist in some Psychology departments. In most of these alleged cases, the men were senior professors and the women were junior professors or students, suggesting that gender and power may interact to create an environment that is psychologically manipulative, or even dangerous, for women. The problem with these behaviors is that many go unseen by others. As we reviewed in Gruber et al. (2021), climate surveys suggest that anywhere from 28 percent to 60 percent of women, ranging from undergraduates to faculty, have experienced some form of sexual harassment in an academic setting.

In the same vein, there is increasing acknowledgment that Psychology departments – and the field at large – remain racist and non-inclusive towards BIPOC scholars and others from identities under-represented in STEM (e.g., https://news.stanford.edu/2020/06/24/psychological-research-racism-problem-stanford-scholar-says/). For instance, the majority of articles published in Psychology journals fail to address topics related to race, are not authored by diverse scholars, and do not study diverse populations (Roberts et al., 2020). Taken together, sexism and racism make BIWOC especially likely to face hurdles in Psychology.

We outlined some of our anonymized experiences of sexual harassment at the outset of this chapter, and a few of us have been willing to bring up certain experiences in our non-anonymized comments. In reality, this advice is best given in person, so please catch us some time and we’ll tell you how we dealt with our various experiences. The long and the short of it is that we hope you never experience sexual harassment, racism, or some other form of discrimination as part of your job. But you may, so know what your options are for legal recourse (if you decide to go that route), for making sure that your workplace is safe and productive for you, and for engaging in self-care.

6. What To Do About It?

Lest we leave you feeling demoralized about our field, it is important to note that the problems that women in Psychology face are not unique to Psychology (yes, that was an attempt to make you feel better …!). That is to say, there are certainly fields where it is just as fraught to be a woman, and fields where women are still in the large minority. As the #MeToo movement showed in 2017 and beyond, gender disparities, sexual harassment, and violence are still rampant in many places (including workplaces). The backdrop of the recent race protests, the Black Lives Matter movement, and the #blackintheivory movement should highlight that gender disparities interact with the racism that still persists in the US and the world; the academy is not free of this racism. We believe that the only reason Psychology’s problems are especially interesting is that – well, as a field, we should really know better than, say, Physics. As a field, we study human behavior, and these disparities are a human behavior problem. Some in this field even study topics particularly relevant to these issues, such as gender, stereotypes, productivity, and family planning. Many of us joined this field because we were interested in increasing people’s well-being or understanding rampant social issues. We know, at least academically, what the issues are, and thus we should be well positioned to develop – or should already have – the tools to fix these problems.

We suggest a number of evidence-based paths forward in Gruber et al. (2021). They include (1) ways that universities and departments can raise awareness and take stock of these disparities among their faculty and students, (2) ways to ensure that women are equally considered for jobs and other career opportunities such as colloquium invitations, (3) ways to increase transparency about finances that predict gender parity, (4) addressing work–family conflict, (5) equalizing service, (6) becoming aware of and confronting gender bias when it occurs, (7) allowing under-represented women to succeed, (8) increasing mentoring opportunities and a sense of belonging for women, and (9) addressing harassment in the workplace. We suggest you have a look at these and think about how you might implement them with your own mentors, collaborators, and department. As you move forward in your career, we hope you will take these with you and help shape the departments that you eventually join as faculty.

We’ll close with our most targeted piece of advice – the thing that you can do to help carry these ideas forward. And that is to persist. Survive. And when you can, thrive. Don’t get us wrong: if at some point during your training you think “this is not for me,” that is OK. Everyone feels that at some point. And if in your heart of hearts you don’t want to pursue this career, you shouldn’t. Many people (women and men alike) decide that an academic career is not for them and go on to have fulfilling, productive careers elsewhere. But when you feel that way (and you will at some points), take a step back and question why. Reach out to a trusted mentor (heck, reach out to one of us) and we will tell you that this too shall pass.

We know that it can be extremely difficult to be a woman in this field; it may be especially difficult to be a woman who also identifies with another under-represented identity in Psychology. But think about it this way – the more diverse our field is, the more people coming up through the ranks will see people like them doing this work, and the more likely they will be to feel that they can do it too. And that diversity will ultimately contribute to better science.

Footnotes

13 Publishing Your Research

1 Preparing a manuscript for publication entails several format requirements, such as print style and size, citations of sources, use of abbreviations, structure of tables and figures, and the order in which sections of the article appear. These are detailed in the Publication Manual of the American Psychological Association (APA, 2020b) and are not covered in this chapter.

2 Arguably the most visible case in the past 20 years was the fabricated report that a commonly used childhood vaccine (measles, mumps, rubella) caused autism among well-functioning children (Wakefield et al., 1998). That the data were faked eventually came to light, but not before far-reaching consequences, including an enormous international backlash against vaccines and the unnecessary deaths of many children whose parents refused vaccinations. Antivaccination movements antedate this report, but social media and the Internet permitted this one to spread widely and over an extended period that continues to this day (Yang et al., 2019).

3 A quantitative measure used to evaluate journals is the “impact factor,” which is based on the frequency with which a journal’s articles are cited in a given time period (2 years) in proportion to the total number of articles the journal published in that period (a worked example appears after these footnotes). An objective quantitative measure of impact has multiple uses for different parties with an interest in a journal’s standing (e.g., libraries making subscription decisions, publishers evaluating the status of a specific journal). Administrators and faculty peers often count the impact factor of the journals in which a colleague publishes, along with how often that colleague’s work is cited by others, among the criteria used for job appointments, promotions in academic rank, and salary adjustments. There has been a strong movement to stop using the impact factor to evaluate the research or merit of an investigator (see Alberts, 2013). The impact factor was not designed to measure that and is subject to all sorts of influences (e.g., influences that vary by discipline, artifacts of the publishing practices of individual journals). Moreover, the impact factor bears little relation to expert views of scientific quality. In 2012, an organization (the San Francisco Declaration on Research Assessment, abbreviated as DORA), initiated at a meeting of the American Society for Cell Biology and including many editors and publishers, examined the ways in which journals are evaluated. Among the outcomes was agreement that the impact factor might be useful for the purposes for which it was intended, but not for evaluating the merit of scientific research. Consequently, DORA urged journals and scientific organizations to drop the use of the impact factor as an index of the quality of a journal or of the articles that appear in it. Many scientific and professional organizations (>1,800 at the time of this writing) and researchers (~15,000) have now signed on to this recommendation not to use or flaunt the impact factor as an index of quality (https://sfdora.org/read/). Even so, many journals still flaunt their impact factor, and researchers occasionally promote their own work based on the impact factor of the journal in which it appeared. This is worth mentioning here in case the reader is considering impact factor as a main or major reason for submitting a manuscript to one journal rather than another.

4 Excellent readings are available to prepare the author for the journal review process (The Trial by Kafka, The Myth of Sisyphus by Camus, Inferno by Dante, and Nausea by Sartre). Some experiences (e.g., a root canal without anesthetic, bungee jumping with a cord that does not stretch in any way, an income tax audit) also are touted to be helpful because they evoke reactions that mimic those experienced when reading reviews of one’s manuscript.

5 Thanks to my dissertation committee for letting me quote from their comments.
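To make the impact-factor arithmetic in footnote 3 concrete, here is a minimal sketch of the standard two-year calculation; the citation and article counts are hypothetical and are not taken from any journal discussed in this chapter.

% Two-year journal impact factor for year Y; the counts below are invented for illustration.
\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
{\text{citable items published in years } Y-1 \text{ and } Y-2}
\]
% Example with made-up counts: 500 such citations to 250 citable items gives 500 / 250 = 2.0.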

References

Aalbersberg, I. J., Appleyard, T., Brookhart, S., Carpenter, T., Clarke, M., Curry, S., Dahl, J., DeHaven, A., Eich, E., Franko, M., Freedman, L., Graf, C., Grant, S., Hanson, B., Joseph, H., Kiermer, V., Kramer, B., Kraut, A., Karn, R. K., … Vazire, S. (2018, February 15). Making science transparent by default; Introducing the TOP Statement. https://osf.io/sm78t/
Alberts, B. (2013). Editorial: Impact factor distortions. Science, 340, 787.
American Educational Research Association. (2006). Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35, 33–40.
American Psychological Association. (2020a). APA and affiliated journals. Online at www.apa.org/pubs/journals
American Psychological Association. (2020b). Publication manual of the American Psychological Association (7th ed.). Washington, DC: American Psychological Association.
Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3–25.
Association for Psychological Science. (2020). APS journals. Online at www.psychologicalscience.org/publications
Bailar, J. C., III, & Patterson, K. (1985). Journal peer review: The need for a research agenda. New England Journal of Medicine, 312, 654–657.
Benos, D. J., Bashari, E., Chaves, J. M., Gaggar, A., Kapoor, N., LaFrance, M., Mans, R., Mayhew, D., McGowan, S., Polter, A., Qadri, Y., Sarfare, S., Schultz, K., Splittgerber, R., Stephenson, J., Tower, C., Walton, A. G., & Zotov, A. (2007). The ups and downs of peer review. Advances in Physiology Education, 31(2), 145–152.
Brainard, J. (2020). Articles in ‘predatory’ journals receive few or no citations. Science, 367(6474), 139.
Case, L., & Smith, T. B. (2000). Ethnic representation in a sample of the literature of applied psychology. Journal of Consulting and Clinical Psychology, 68, 1107–1110.
Cicchetti, D. V. (1991). The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences, 14, 119–186.
Cooper, H. (2020). Reporting quantitative research in psychology: How to meet APA style journal article reporting standards (2nd ed.). Washington, DC: American Psychological Association.
De Los Reyes, A., & Kazdin, A. E. (2008). When the evidence says, “yes, no, and maybe so”: Attending to and interpreting inconsistent findings among evidence-based interventions. Current Directions in Psychological Science, 17, 47–51.
DeHaven, A. (2017, May 23). Preregistration: A plan, not a prison. Retrieved from https://cos.io/blog/preregistration-plan-not-prison/
Des Jarlais, D. C., Lyles, C., Crepaz, N., & the TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366.
Elson, M., Huff, M., & Utz, S. (2020). Meta science on peer review: Testing the effects of study originality and statistical significance in a field experiment. Advances in Methods and Practices in Psychological Science, 3(1), 53–65.
Francis, G. (2012). The psychology of replication and replication in psychology. Perspectives on Psychological Science, 7(6), 585–594.
Gerber, A., Arceneaux, K., Boudreau, C., Dowling, C., Hillygus, S., Palfrey, T., Biggers, D. R., & Hendry, D. J. (2014). Reporting guidelines for experimental research: A report from the experimental research section standards committee. Journal of Experimental Political Science, 1(1), 81–98.
Gunther, A. (2011). PSYCLINE: Your guide to psychology and social science journals on the web. Retrieved August 2011 from www.psycline.org/journals/psycline.html
Hawkins, E. (2019, October 21). Journals test the Materials Design Analysis Reporting (MDAR) checklist. Of Schemes and Memes blog, Nature.com. http://blogs.nature.com/ofschemesandmemes/author/lizh
Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
Kazdin, A. E. (2017). Research design in clinical psychology (5th ed.). Boston, MA: Pearson.
Kepes, S., McDaniel, M. A., Brannick, M. T., & Banks, G. C. (2013). Meta-analytic reviews in the organizational sciences: Two meta-analytic schools on the way to MARS (the Meta-Analytic Reporting Standards). Journal of Business and Psychology, 28(2), 123–143.
Kirman, C. R., Simon, T. W., & Hays, S. M. (2019). Science peer review for the 21st century: Assessing scientific consensus for decision-making while managing conflict of interests, reviewer and process bias. Regulatory Toxicology and Pharmacology, 103, 73–85.
Lehrer, J. (2010). The truth wears off. The New Yorker. Available at www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer
Levelt Committee, Noort Committee, & Drenth Committee. (2012, November). Flawed science: The fraudulent research practices of social psychologist Diederik Stapel. Available at www.commissielevelt.nl/wp-content/uploads_per_blog/commissielevelt/2012/11/120695_Rapp_nov_2012_UK_web.pdf
Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73, 26–46.
Ma, C., Liu, Y., Neumann, S., & Gao, X. (2017). Nicotine from cigarette smoking and diet and Parkinson disease: A review. Translational Neurodegeneration, 6(1), 18. https://translationalneurodegeneration.biomedcentral.com/track/pdf/10.1186/s40035-017-0090-8
Metcalf, J., & Crawford, K. (2016). Where are human subjects in big data research? The emerging ethics divide. Big Data & Society, 3(1), 2053951716650211.
Miller, L. R., & Das, S. K. (2007). Cigarette smoking and Parkinson’s disease. Experimental and Clinical Sciences International, 6, 93–99.
Moher, D., Schulz, K. F., & Altman, D. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Journal of the American Medical Association, 285, 1987–1991.
Moonesinghe, R., Khoury, M. J., & Janssens, A. C. J. W. (2007). Most published research findings are false – But a little replication goes a long way. PLoS Medicine, 4(2), e28.
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115, 2600–2606.
Pyrczak, F. (2017). Writing empirical research reports: A basic guide for students of the social and behavioral sciences (8th ed.). Abingdon: Routledge.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.
Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99, 178–182.
Spier, R. (2002). The history of the peer-review process. Trends in Biotechnology, 20, 357–358.
Stahel, P. F., & Moore, E. E. (2014). Peer review for biomedical publications: We can improve the system. BMC Medicine, 12(1), 179.
Suresh, S. (2011). Moving toward global science. Science, 333, 802.
Tate, R. L., Perdices, M., Rosenkoetter, U., McDonald, S., Togher, L., Shadish, W., Horner, R., Kratochwill, T., Barlow, D. H., Kazdin, A., Sampson, M., & Shamseer, L. (2016). The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016: Explanation and elaboration. Archives of Scientific Psychology, 4(1), 10–31.
Thomson Reuters. (2011). Journal search: Psychology. New York: Thomson Reuters. Retrieved August 2011 from http://science.thomsonreuters.com/cgi-bin/jrnlst/jlresults.cgi?PC=MASTER&Word=psychology
Thursby, G. (2011). Psychology virtual library: Journals (electronic and print). Retrieved August 2011 from www.vl-site.org/psychology/journals.html
Von Elm, E., Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C., & Vandenbroucke, J. P. (2007). The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. Annals of Internal Medicine, 147(8), 573–577.
Wakefield, A. J., Murch, S. H., Anthony, A., Linnell, J., Casson, D. M., Malik, M., Berelowitz, M., Dhillon, A. P., Thomson, M. A., Harvey, P., Valentine, A., Davies, S. E., & Walker-Smith, J. A. (1998). RETRACTED: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet, 351(9103), 637–641.
Watanabe, M., & Aoki, M. (2014, January 10). Researcher: Test data falsified in major Alzheimer’s disease project. The Asahi Shimbun. Available online at https://ajw.asahi.com/article/behind_news/social_affairs/AJ201401100085
Web of Science. (2011). 2010 Journal citation reports. New York: Thomson Reuters. Retrieved August 2011 from http://wokinfo.com/products_tools/analytical/jcr/
Yang, Y. T., Broniatowski, D. A., & Reiss, D. R. (2019). Government role in regulating vaccine misinformation on social media platforms. JAMA Pediatrics. Published online September 3, 2019.
Zimmer, M. (2010). “But the data is already public”: On the ethics of research in Facebook. Ethics and Information Technology, 12(4), 313–325.

References

Bain, K. (2004). How do they prepare to teach? In What the best college teachers do (pp. 48–67). Cambridge, MA: Harvard University Press.
Brookfield, S. (1995). Becoming a critically reflective teacher. San Francisco: Jossey-Bass.
Deslauriers, L., Schelew, E., & Wieman, C. (2011). Improved learning in a large-enrollment physics class. Science, 332, 862.
Diamond, R. M. (2008). Designing and assessing courses and curricula: A practical guide (3rd ed.). San Francisco: Jossey-Bass.
Elbow, P. E. (1986). Embracing contraries. New York: Oxford University Press.
Manning, S., & Johnson, K. (2011). The technology toolbelt for teaching. San Francisco: Jossey-Bass.
Maslow, A. H. (1962). Toward a psychology of being. Princeton, NJ: Van Nostrand.
Mayer, R. E., Paul, R. P., & Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (edited complete ed., Anderson, L. W. & Krathwohl, D. R., Eds.). New York: Longman.
McKeachie, W. J. (1999). Teaching tips: Strategies, research and theory for college and university teachers (10th ed.). Boston, MA: Houghton Mifflin.
Nilson, L. B. (2010). Teaching at its best: A research-based resource for college instructors (3rd ed.). San Francisco: Jossey-Bass.
Steele, C. M. (2011). Whistling Vivaldi: How stereotypes affect us and what we can do. New York: W. W. Norton & Co.
Strayhorn, T. L. (2012). College students’ sense of belonging: A key to educational success for all students. New York: Routledge.
Wiggins, G. P., & McTighe, J. (2005). Understanding by design (2nd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.
Wulff, D. H. (2005). Aligning for learning: Strategies for teaching effectiveness. Bolton: Anker Publishing.

References

Alper, J. (1993). The pipeline is leaking women all the way along. Science, 260, 409–411. doi:10.1126/science.260.5106.409
American Psychological Association, Committee on Women in Psychology. (2017). The changing gender composition of psychology: Update and expansion of the 1995 task force report. Retrieved from www.apa.org/pi/women/programs/gender-composition/task-force-report.pdf
Amanatullah, E. T., & Morris, M. W. (2010). Negotiating gender roles: Gender differences in assertive negotiating are mediated by women’s fear of backlash and attenuated when negotiating on behalf of others. Journal of Personality and Social Psychology, 98, 256–267. doi:10.1037/a0017094
Artz, B., Goodall, A. H., & Oswald, A. J. (2018). Do women ask? Industrial Relations: A Journal of Economy and Society, 57, 611–636. doi:10.1111/irel.12214
Bareket-Shavit, C., Goldie, P. D., Mortenson, E., & Roberts, S. O. (in preparation). Gender inequality in psychological research. Manuscript in preparation.
Brower, A., & James, A. (2020). Research performance and age explain less than half of the gender pay gap in New Zealand universities. PLoS One, 15(1), Article e0226392. doi:10.1371/journal.pone.0226392
Carter-Sowell, A. R., Dickens, D. D., Miller, G., & Zimmerman, C. A. (2016). Present but not accounted for: Examining how marginalized intersectional identities create a double bind for women of color in the academy. In Ballenger, J., Polnick, B., & Irby, B. (Eds.), Research on Women and Education Series. Women of color in STEM: Navigating the workforce (pp. 181–200). Charlotte, NC: Information Age Publishing.
Ceci, S. J., Ginther, D. K., Kahn, S., & Williams, W. M. (2014). Women in academic science: A changing landscape. Psychological Science in the Public Interest, 15, 75–141. doi:10.1177/1529100614541236
Dworkin, J. D., Linn, K. A., Teich, E. G., Zurn, P., Shinohara, R. T., & Bassett, D. S. (2020). The extent and drivers of gender imbalance in neuroscience reference lists. Nature Neuroscience, 23, 918–926. https://doi.org/10.1038/s41593-020-0658-y
Geiger, A. W., Livingston, G., & Bialik, K. (2019, May 6). Six facts about U.S. moms (Analysis of American Time Use Survey data). Pew Research Center. Retrieved from www.pewresearch.org/fact-tank/2019/05/08/facts-about-u-s-mothers/
Guarino, C. M., & Borden, V. M. H. (2017). Faculty service loads and gender: Are women taking care of the academic family? Research in Higher Education, 58, 672–694. doi:10.1007/s11162-017-9454-2
Guy, B., & Arthur, B. (2020). Academic motherhood during COVID-19: Navigating our dual roles as educators and mothers. Gender, Work and Organization, 27, 887–899. https://doi.org/10.1111/gwao.12493
Gruber, J., Mendle, J., Lindquist, K. A., Schmader, T., Clark, L. A., Bliss-Moreau, E., Akinola, M., Atlas, L., Barch, D. M., Barrett, L. F., Borelli, J. L., Brannon, T. N., Bunge, S. A., Campos, B., Cantlon, J., Carter, R., Carter-Sowell, A. R., Chen, S., Craske, M. G., … Williams, L. A. (2021). The future of women in psychological science. Perspectives on Psychological Science, 16, 483–516. doi:10.1177/1745691620952789
Hechtman, L. A., Moore, N. P., Schulkey, C. E., Miklos, A. C., Calcagno, A. M., Aragon, R., & Greenberg, J. H. (2018). NIH funding longevity by gender. Proceedings of the National Academy of Sciences, 115, 7943–7948. doi:10.1073/pnas.1800615115
Hengel, E. (2020). Publishing while female: Are women held to higher standards? Evidence from peer review. Retrieved from www.erinhengel.com/research/publishing_female.pdf
King, M. M., Bergstrom, C. T., Correll, S. J., Jacquet, J., & West, J. D. (2017). Men set their own cites high: Gender and self-citation across fields and over time. Socius: Sociological Research for a Dynamic World, 3, 1–22. doi:10.1177/2378023117738903
Kugler, K. G., Reif, J. A. M., Kaschner, T., & Brodbeck, F. C. (2018). Gender differences in the initiation of negotiations: A meta-analysis. Psychological Bulletin, 144, 198–222. doi:10.1037/bul0000135
Leslie, S. J., Cimpian, A., Meyer, M., & Freeland, E. (2015). Expectations of brilliance underlie gender distributions across academic disciplines. Science, 347, 262–265. doi:10.1126/science.1261375
Lindquist, K. A., Gruber, J., Schleider, J. L., Beer, J. S., Bliss-Moreau, E., & Weinstock, L. (2020, November 25). Flawed data and unjustified conclusions cannot elevate the status of women in science. https://doi.org/10.31234/osf.io/qn3ae
Magua, W., Zhu, X., Bhattacharya, A., Filut, A., Potvien, A., Leatherberry, R., Lee, Y.-G., Jens, M., Malikireddy, D., Carnes, M., & Kaatz, A. (2017). Are female applicants disadvantaged in National Institutes of Health peer review? Combining algorithmic text mining and qualitative methods to detect evaluative differences in R01 reviewers’ critiques. Journal of Women’s Health, 26, 560–570. doi:10.1089/jwh.2016.6021
Mason, M. A., Wolfinger, N. H., & Goulden, M. (2013). Do babies matter? Gender and family in the ivory tower. Piscataway, NJ: Rutgers University Press.
Minello, A. (2020). The pandemic and the female academic. Nature. https://doi.org/10.1038/d41586-020-01135-9
Minello, A., Martucci, S., & Manzo, L. K. (2021). The pandemic and the academic mothers: Present hardships and future perspectives. European Societies, 23(Suppl. 1), S82–S94. https://doi.org/10.1080/14616696.2020.1809690
National Science Foundation, National Center for Science and Engineering Statistics. (2014). Survey of doctorate recipients, 2013. Retrieved from http://ncsesdata.nsf.gov/doctoratework/2013/
Odic, D., & Wojcik, E. H. (2020). The publication gender gap in psychology. American Psychologist, 75, 92–103. doi:10.1037/amp0000480
Roberts, S. O., Bareket-Shavit, C., Dollins, F. A., Goldie, P. D., & Mortenson, E. (2020). Racial inequality in psychological research: Trends of the past and recommendations for the future. Perspectives on Psychological Science, 15(6), 1295–1309. doi:10.1177/1745691620927709
Sege, R., Nykiel-Bub, L., & Selk, S. (2015). Sex differences in institutional support for junior biomedical researchers. Journal of the American Medical Association, 314, 1175–1177. doi:10.1001/jama.2015.8517
Witteman, H. O., Hendricks, M., Straus, S., & Tannenbaum, C. (2019). Are gender gaps due to evaluation of the applicant or the science? A natural experiment at a national funding agency. The Lancet, 393(10171), 531–540. doi:10.1016/S0140-6736(18)32611-4
Wood, W., & Eagly, A. H. (2012). Biosocial construction of sex differences and similarities in behavior. In Olson, J. M. & Zanna, M. P. (Eds.), Advances in experimental social psychology (Vol. 46, pp. 55–123). New York: Academic Press. doi:10.1016/B978-0-12-394281-4.00002-7
Zimmerman, C. A., Carter-Sowell, A. R., & Xu, X. (2016). Examining workplace ostracism experiences in academia: Understanding how gender differences in the faculty ranks influence inclusive climates on campus. Frontiers in Psychology, 7, Article 753. doi:10.3389/fpsyg.2016.00753
Figures and Tables
Table 11.1 Guides and templates for pre-registration
Figure 12.1 Poster formats
Table 12.1 Handling difficult questions
Table 12.2 Suggestions for poster presentations
Table 12.3 Oral presentations
Table 12.4 Using audiovisual enhancements
Figure 12.2 Sample poor and good slides for an oral presentation
Table 13.1 Major questions to guide journal article preparation
Table 15.1 Possible template of steps in your aims section
Table 15.2 Tips by grant section
Table 15.3 Hypothetical grant timeline
