
Research Ethics 101: Dilemmas and Responsibilities

Published online by Cambridge University Press:  27 September 2012

Lee Ann Fujii
University of Toronto

Abstract

The emphasis in political science on procedural ethics has led to a neglect of how researchers should consider and treat study participants, from design to publication stage. This article corrects this oversight and calls for a sustained discussion of research ethics across the discipline. The article's core argument is twofold: that ethics should matter to everyone, not just those who spend extended time in the field; and that ethics is an ongoing responsibility, not a discrete task to be checked off a “to do” list. Ethics matter in all types of political science research because most political science involves “human subjects.” Producers and consumers of political science research need to contemplate the ambiguous and oftentimes uncomfortable dimensions of research ethics, lest we create a discipline that is “nonethical,” or worse, unethical.

The Profession

Copyright © American Political Science Association 2012

Mention research ethics to any graduate student or colleague and the response is often a roll of the eyes followed by complaints about how burdensome the Institutional Review Board (IRB) process is. "Research ethics" is often synonymous with "IRB."[1] This emphasis on what Guillemin and Gillam (2004) call "procedural ethics" has had ethical implications of its own. It has masked—perhaps even enabled—a neglect of "ethics in practice" in political science. This article corrects this imbalance. Its goal is to provoke a sustained discussion of research ethics across the discipline. Its core argument is twofold: that ethics should matter to everyone, not just those who spend extended time in the field; and that ethics is an ongoing responsibility, not a discrete task to be checked off the researcher's "to do" list.

WHY ETHICS?

Ethics matter in all types of political science research, whether ethnographies or surveys, elite or nonelite interviews, focus groups or field experiments.[2] Ethics matter because most political science involves "human subjects"; what varies is the distance between researcher and participant.[3] Thus, consumers as well as producers of political science research need to contemplate the messy, ambiguous, and oftentimes uncomfortable dimensions of research ethics, lest we become complicit in building a discipline that is nonethical, or worse, unethical.

Research ethics matter for the simple reason that social scientists can bring real harm to study participants and collaborators. These harms can be social. A seemingly innocuous question can touch on sensitive issues (Svensson 2006). Harms can be psychological. Insisting on individual (e.g., private) interviews can retraumatize vulnerable respondents (Bell 2001, 185). Harms can also be physical. Research assistants can be subject to arrest, detention, or imprisonment (du Toit 1980, 277–78; Paluck 2009), and study participants can be subject to government reprisals. In one notable case of physical harm, anthropologist Georges Condominas (1973, 4) found, to his horror, that the United States Department of Commerce, without his authorization and hence in direct violation of international copyright law, had translated his book on Vietnam from French to English and distributed copies to Green Beret soldiers fighting in Vietnam. Condominas learned of this from one of his study participants whom US Special Forces had tracked down and tortured.

Despite the gravity of the harms that research projects can pose, directly or indirectly, discussions tend to focus more on procedural ethics—how to secure IRB approval—than on ethics in practice. The contributions of those who have gone beyond procedural ethics have been all too brief, usually encapsulated within larger topics such as field research or teaching (Carapico 2006; Kier 2003; MacLean 2006; Romano 2006; Woliver 2002).

Scholars working in conflict zones have gone further, examining the ethical challenges specific to war or postwar settings (Dauphinée 2007; Wood 2006). Wood (2006, 380–81), for example, conducted fieldwork in El Salvador during that country's civil war and discusses the challenges of obtaining consent. To ensure maximal protection, she gave her interviewees a menu of options. Participants could decide what they wanted to tell Wood, specify whether it was for publication or for her knowledge only, and state whether she could write down what they told her. Carpenter (2012) expands the discussion, reflecting cogently on how academic and disciplinary norms affect the treatment of research participants not only in the field, but also in lectures and published work outside the field.

Recent edited volumes on interpretive methods (Yanow and Schwartz-Shea 2006) and ethnography (Schatz 2009) explore additional topics, such as the importance of reflexivity in all facets of research and power relations between researcher and participant. Bayard de Volo (2009, 231), for example, writes about her status as a powerful outsider studying an organization of women who had lost sons in the Contra war in Nicaragua in the 1970s and 1980s. When a debate arose about enlarging the group's membership, she was careful to remain neutral. Pachirat (2011) also enjoyed high status in his research site, a slaughterhouse in middle America, but chose not to disclose his status so that he could gain an unfettered look at the daily realities of the "kill floor."

As this brief survey shows, scholars have faced a variety of ethical challenges and have responded in different ways. By what measure should we judge their actions and choices?

WHOSE ETHICS?

The guidelines that American universities use are the Belmont Report of 1979 and the IRB Guidebook of 1993.[4] These documents put forth three guiding principles for ethical research. The first is "respect for persons," or treating people as ends, not means. This involves (among other things) obtaining voluntary and informed consent from participants. The second is "beneficence," which refers to the researcher's duty to maximize benefits and minimize harm. This principle generally concerns participants' rights to privacy and confidentiality. The third is "justice," which refers to the way in which researchers select participants. The process should be "fair" and not slanted toward any group out of convenience or personal prejudice.

While these principles are sound, they do not always translate into ethical research for several reasons. First, no set of rules can cover every "ethically important moment" that arises in the field (Guillemin and Gillam 2004). Second, compliance can be meaningless. Signed consent forms have little value if participants do not understand to what they were consenting (du Toit 1980, 280; Guillemin and Gillam 2004, 269; Ross 2005, 100). Third, researcher ethics can conflict with participant ethics. Destruction of data might be ethical from the researcher's standpoint but constitute a harm in the eyes of participants.[5] For these reasons and more, the responsibility to act ethically rests ultimately on the individual researcher.

In the following sections, I examine some common ethical dilemmas that researchers have faced. I draw mainly from the literature in anthropology and sociology and from my own experiences examining political violence in rural Rwanda. Though many of the examples that follow focus on "vulnerable" populations,[6] similar issues can arise with other kinds of participants and other kinds of research.

DILEMMAS OF POWER

One of the major sources of ethical dilemmas is the power imbalance between researcher and researched. The magnitude and direction of asymmetry will vary by project, researcher, and research setting; so, too, will the areas where the researcher or participant exerts more power over the process. The existence of a power imbalance, however, bears on one of the central responsibilities that researchers have—to obtain voluntary, informed consent.

The term “consent” presumes that the person's participation in the study is voluntary—that the person has not been pressured, coerced, or tricked into participating. The term “informed” indicates that the person understands all the dimensions of the study and particularly those that involve him or her directly, including possible risks and benefits he or she may incur as a result of participating. Only by being fully informed, the logic goes, can a person give proper consent.

Few would argue with the need for obtaining consent. It is at the core of ethically responsible research. Real-world conditions, however, often call into question the way in which institutional rules conceive of this most basic task. Problems start with the instrument of consent, which automatically casts the relationship between researcher and participant in "contractual" terms (Ross 2005). The researcher does all the "informing"; the participant can only agree or decline. By its design, the consent protocol positions the researcher as someone who already knows more about the participant's world than the participant does—a highly improbable scenario when the researcher is a stranger in the other's world.[7]

The researcher spells out what protections are in place to minimize harm. Many standard protections are deeply embedded in American technologies and conceptions of what constitutes "security." Researchers generally promise to protect participants' identities and confidentiality by encrypting computer files, storing laptops under lock and key, and destroying data after completion of the study. The researcher also provides participants with the phone number of the researcher's home institution should they have any questions about their rights.

Many of these "protections" do little to shield participants from potential harms. Those determined to gain access (e.g., state officials) can still confiscate researchers' work. Similarly, encrypting electronic data does not guard against rumors or the actions of angry or jealous neighbors. In places where people do not own phones or cannot afford phone calls, providing the phone number of the researcher's institution does little to ensure study participants' rights. Even if participants are able to make long-distance calls, many will never avail themselves of this protection for reasons of language, background, or bureaucratic fluency.

Although no amount of protection will guard against all possible risks, researchers should think seriously about what protections they can offer that are meaningful to study participants and appropriate for the context. Researchers should first look to their local contacts (colleagues, friends, assistants, interlocutors) for insight into what risks (and benefits) might matter to local people. Because the nature and source of risks (and protections) can change over time, researchers should continue to ask such questions throughout their time in the field, as an added check that they are doing all they can to protect participants.

Having obtained verbal or written consent, the researcher may still need to think about what consent means—and does not mean—in the local context. People consent for all kinds of reasons. Many people, for example, find it hard to say no to a person in authority. Even in the United States, study participants have signed consent forms because they felt they could not refuse (Gray 1975, 128, cited in Cassell 1980, 29–30). Goduka (1990) faced this issue while researching black child development under apartheid in South Africa. The people in her research site were not highly educated; the community suffered from neglect by state officials and support programs; and political conditions systematically relegated blacks to inferior living conditions, medical care, and schooling. As Goduka (1990, 333) explains:

How does a researcher secure informed consent when more than half of the subjects in her study are illiterate and not familiar with research enterprises? Such people may think that refusing to participate would create problems for them and their children. On the other hand, agreeing to participate may reflect their submission to the school, or to the researcher, who represent authority in the eyes of black families.

As Goduka points out, people may consent not only because of social pressure, but also because they believe that establishing a relationship with the researcher will be beneficial in and of itself. In extremely poor, marginalized, or illiterate communities, people may view the researcher as a valuable patron—someone who can provide tangible benefits, such as financial aid, legal assistance, and job referrals. People may acknowledge (through their consent) that they understand they will receive no payment, but may still count on benefits yet to come. In Rwanda in 2004, I interviewed prisoners who had been incarcerated on charges of genocide. Despite a lengthy verbal consent statement in which I explained there would be no payment of any kind for their time, many continued to ask for my help with the processing of their legal files.

In 2009, when I returned to Rwanda to begin interviews for a new project, many of the prisoners I had interviewed in 2004 had been released. I went to talk to several of them at their homes. Once again, despite a lengthy verbal consent protocol that included all the usual caveats about there being no payment for their participation, many continued to ask for various forms of assistance. One asked for help finding a job, another for help paying restitution to genocide survivors whose property had been stolen during the genocide. Another told me bluntly that he expected quid pro quo "assistance" because I had published a book from the information he and his fellow prisoners had provided and was surely making money from it.

Goduka's study participants in South Africa also saw her mere presence in their community as the single biggest benefit of participating. They viewed her as a "savior"—this, despite explicit language in her IRB-approved consent forms stating that there were few, if any, benefits to participating in her study. As Goduka (1990, 333) explains: "They were expecting benefits from the researcher, regardless of what the form said."

How should researchers deal with the ambiguities of consent? Because “yes” can mean many things, researchers should ask themselves why people might be consenting and what expectations they might have. Are people consenting because they hope to benefit from the relationship? If so, what are the benefits they are counting on? Are they consenting because they feel they cannot say “no” to someone they view as socially superior? Are they saying “yes” because they do not understand what the researcher is asking of them? Although researchers working in communities that are poor, politically repressed, or socially fractured need to be particularly sensitive to what their presence might mean to potential informants—what promises or hopes their presence implies—researchers working in less-poor or less-fractured communities should also consider what unspoken assumptions might lie behind a “yes” on the consent form. All researchers should continue to ask these questions over time, as new information might require new approaches.

The need to adapt to new information was brought home to me in 2009, when a prisoner I was interviewing explained why it had taken him so long to come for his interview. In Rwanda, prisoners are required to wear a uniform when they leave the interior cell block to enter the main courtyard, where visitors like me are allowed to talk to prisoners privately. Many prisoners do not own a uniform, so they borrow one from another prisoner. This prisoner explained that he had had a hard time finding anyone willing to lend him a uniform, and when he finally did find someone, he had to barter his only meal of the day in exchange.

The dilemma was that even before I could ask this prisoner for his consent, he had already incurred a serious cost. Not all prisoners had this problem (indeed, this was the first time I had heard of such a situation), but the fact that this prisoner had to pay with his only meal of the day indicates he was among the most vulnerable in the prison's social hierarchy (Tertsakian 2008). The two interviews I had with him had thus cost the man two meals, and judging by his emaciated build, it was a price he could ill afford. Although it is possible he would have made the same choice had he known the cost beforehand, I do not believe he was in a position to make that choice freely.

The fact that I had already interviewed this man twice raised a new ethical issue. Having already (albeit unwittingly) harmed this man, was I obligated to make restitution in some way? If so, how was I supposed to do this? Prison authorities would not have allowed me to feed him during an interview; nor would they have allowed me to pay him in kind (by providing him a uniform, for example). Even if prison officials had allowed me to bring him food or a uniform, I could not be assured that whatever items I brought would have actually reached the man, because those in power (from prison authorities to prisoner capos) control access to all goods coming into the prison. In all likelihood, someone more powerful would have confiscated anything I brought (Tertsakian 2008).

If I found a way to solve this particular dilemma, how should I proceed going forward? Should I only interview prisoners who own their own uniforms (or those who do not have to give up their only meal of the day to borrow someone else's)? If so, how would I find out this information beforehand? Conversely, if I exclude the most socially marginalized prisoners, would I not be depriving them of an opportunity to make contact with a powerful outsider?

Like most ethical moments in the field, this one could not be resolved through any single set of actions. Attempts at resolving one dilemma can create new ones. I never did address this issue in 2009, and my inaction had ethical implications as well. Knowing that I would be returning to Rwanda, however, allowed me to continue to think of ways to mitigate the harm. Contemplating dilemmas over time can lead to new ways or opportunities to address them.

DILEMMAS OF PROXIMITY

Just as power hierarchies give rise to dilemmas around consent, the researcher's social proximity to participants and interlocutors can give rise to dilemmas about how to maintain people's privacy and confidentiality. When researchers insert themselves, even as observers, they alter the social landscape. They do so by drawing attention to those in their orbit, such as key informants, research assistants, and interpreters. Such attention may be a boon to some, but a threat to others.

While conducting fieldwork in Rwanda in 2004, I split my time between two research sites. My activities followed a predictable rhythm and my presence was always conspicuous. Because of this, I quickly realized it would be impossible to hide the identities of those I talked to and visited. Instead, what I tried to ensure was that my proximity (or perceived proximity) to certain people would not draw unwanted attention to them. As Bourgois (1990, 44) explains, one of the most basic ethical considerations for field researchers is being "wary of the social disapproval foisted on our primary informants when they become the objects of envy or ridicule from the rest of the community because of the resources, prestige, or shame we [researchers] heap on them." I hoped that my recurring presence was not bringing shame, envy, or suspicion on anyone. My familiarity with stories of Rwandans who had acted on jealousies or resentments by having others (even family members) imprisoned made me ever vigilant about how my proximity to interviewees affected their relationships with family members and neighbors.

Although I knew I could not hide the identities of those I visited, I believed I could act in ways that would help dispel the worst suspicions and minimize jealousies. The strategy my interpreter and I used was one of transparency in everything we did and said. We introduced ourselves to local officials. We gave detailed descriptions of the project to all those we interviewed. We showed up when we said we would and sent messages if we could not make a scheduled appointment. We tried to act in ways that matched what we told people we were doing. Aligning words and deeds is particularly crucial in postconflict settings, where suspicion of outsiders runs especially high (Peritore 1990; Sluka 1990).

We also strove to protect identities whenever we could. On the rare occasions when people asked how we had obtained their names, for example, my interpreter always replied that a local official had given them to us. Because local officials are responsible for knowing everything that goes on in their communities, this response was always plausible, although it was not always the truth. Sometimes we knew of people through documents or other interviews. Not telling people the truth about how we obtained their names was a necessary lie to preserve others' confidentiality. Here, I weighed the cost of deception against the potential harm of disclosure and judged deception to be the lesser harm.

Did my attempt to behave transparently succeed at minimizing potential harms to my informants? I had the opportunity to raise this question when I returned to my two sites in July 2008, nearly four years after my original fieldwork. I wanted to visit people whom I had come to know well, but political conditions in 2008 determined the level of access I would have.

In the first research site, my interpreter and I began by presenting ourselves to the local authority. The man balked at our letter of authorization, then assigned a minder to escort us. Our minder was a young man who was an ardent booster for his community and clearly ambitious. Unlike many Rwandans who resort to speaking Kinyarwanda when another Rwandan is present, this young man always spoke directly to me in French. This struck me as less a courtesy and more a desire to demonstrate his level of education.

Our minder asked us whom we wanted to see. I mentioned a man I had talked with many times in 2004, but he was not at home. We continued to the house of another man named "Jean Marie,"[8] whom I had also interviewed several times in 2004. When we arrived, Jean Marie and I recognized each other immediately. He invited us inside and we sat in a large, unfinished room.

After initial pleasantries, I had to work hard to think of things to say. I did not want to ask anything of substance because I knew that our minder was precisely the kind of ambitious local authority who would report to his superiors all that we had discussed. I then remembered a conversation Jean Marie and I had had in 2004 about the fact that he had only one child and was not planning on having any more. I recounted my memory of this exchange and then asked Jean Marie if he still had only one child. He smiled and said, “Yes.” We laughed at his advanced—nay, radical—attitude about family planning in a country that has had one of the highest birth rates in the world. At that point, I decided to end the visit. I told my interpreter I did not want to mention other names to our minder because it would make these people too visible. Visibility in this context constituted a potential risk.

When I travelled to my other research site (located in a different part of the country), I was able to talk more freely with the people I visited. One of the first questions I asked was whether anything bad had happened to them as a result of having talked with me in 2004. No one mentioned any problems. It is certainly possible that in the context of a social visit—and one that was occurring after a four-year absence—people would not have brought up any problems they had had, but, given our history together, I felt fairly confident that they would have.

Although I was mostly focused on the potential risks my presence brought to those I visited, a researcher's proximity does not always draw unwelcome attention. Wood (2006) attributes her ability to conduct field research during the ongoing war in El Salvador in the late 1980s and early 1990s to people's desire that their story be recorded and transmitted to the outside world. Bourgois (1990, 48), too, writes about the positive reaction he received when visiting a Salvadoran refugee camp in Honduras to explore whether it would be feasible to do research there. Far from viewing his presence as endangering them, the refugees saw it as an added layer of security and urged Bourgois to stay.

Even in situations where the researcher's presence is welcome, what constitutes ethical choices is not clear-cut. Just as his presence would have brought added security to the refugees, so, too, would Bourgois's absence have left them suddenly vulnerable. The extra layer of security would have disappeared with him. This is not to argue that researchers should refrain from inserting themselves into threatening or insecure research settings. It is to underscore that the researcher's presence in a research community, even when community members view it as desirable, can present harms as well as benefits. These harms may be mostly social in nature, but they may also lead to more serious (physical) consequences. Researchers should consider the implications of their presence throughout their term in the field and continue to take note of local political and social conditions as these shift and change.

DILEMMAS OF PUBLICATION

Ethical issues do not end when researchers leave the field. They continue through the publication and dissemination of written work. In this phase, professional incentives and ethical obligations can pull in different directions. Researchers must publish to advance their careers, but how far do they need to go to protect confidentiality and privacy?

In my written work, I did my best to hide identities. I worked from a level of near paranoia that some ambitious Rwandan civil servant would comb through the pages of my book to figure out exactly where I had done my research and to whom I had spoken. I took all the precautions I could think of to hide the identities of those whose stories appear in my published work. I used pseudonyms for people and places and deleted or altered references to landmarks that could help identify my research sites (Fujii 2006, 2008, 2009, 2010).

I developed my own rules for choosing pseudonyms. I used French names because Rwandans of a certain age usually have French first names. I picked names that did not approximate the person's real name in any way but were still common in Rwanda. Scholars should choose pseudonyms carefully, as thinly disguised names can allow for easy identification. Ellis (1995), for example, chose names that sounded similar to the person's real name to make it easier for her to keep both sets of names straight. But this also made it easy for her participants to recognize themselves in the pages of her book. Given the intimate details many shared with Ellis, some ended up feeling betrayed by what she wrote.

In addition to using pseudonyms, I also withheld or obscured biographical details when doing so did not detract from analysis. Instead of referring to “Richard, father of five,” for example, I might refer to Richard as the father of “several” children. I chose not to suppress or alter biographical details that related to the book's argument, however. If an informant had been a local authority during the genocide or had close ties to a local authority, I felt I could not change that information, because the fact of such ties was germane to the argument of my book.

Although I took as many precautions as I could think of, there was no way to obscure identities entirely—to the point where even local residents (should they read the book or have it read to them) would not be able to recognize themselves or others. Rwanda is also such a small and intensively administered country that if officials wanted to know to whom I talked, they could easily find out.[9]

In spite of my desire to take as many precautions as possible, I still, at times, placed professional priorities above ethical responsibilities. I did not alter or omit details when those details were important to my argument. Looking back, I believe I should have weighed these choices more carefully or at least been more aware of the choice I was making. "Mindful ethics" should always serve as a counterweight to professional incentives (González-López 2011).

During the publication stage, researchers should also anticipate how various audiences might use their published work and take necessary precautions. We cannot control what others do with our work, but we are obligated to think through what risks we introduce when we publish certain data. Even with participants' consent, some scholars still choose to withhold certain data to minimize harm. Wood (2006), for example, decided not to publish some of her data from El Salvador out of fear that publication of these materials could put her informants at risk (even though her informants had told her she could use the data in her published work). Her instincts turned out to be correct. As Wood (2006, 382n14) writes: "My caution was recently confirmed when a review of my second book appeared in an issue dedicated to understanding insurgency of a publication (Special Warfare, December 2004) of the US Army's John F. Kennedy Special Warfare Center and School, which may well be read by Salvadoran military officers." As Wood's experience underscores, our ethical obligations can take on new and very high stakes during publication; researchers need to take these new (potential) risks and harms as seriously as those that arise in earlier stages.

FINAL THOUGHTS

Ethical research begins not with an IRB-approved protocol but with researchers' commitment to engage with difficult issues over time. Although actual dilemmas will be specific to the researcher, project, and setting, some dos and don'ts apply generally. First, do start to think about ethical issues at the design phase of your project. Do not think of ethics only in terms of IRB (or institutional) approval. Second, after you begin field research, do seek ongoing input from local people about what they consider to be "risks," "harms," "benefits," and "protections." Do not assume you know more than they do. Third, when faced with competing priorities, remain mindful of your duty to minimize harm to participants as well as to local research assistants and interlocutors. Do not default to what is good for your career or what is sure to impress your dissertation committee. Fourth, remember that ethical research often involves making difficult trade-offs that do not necessarily leave you, the researcher, feeling better about yourself. Do not try to avoid making these trade-offs, but be clear about why you are making them. Fifth, never assume that you are no longer accountable because your project is exempt from full IRB review. Ignoring ethics is itself an (un)ethical move.

If scholars and graduate students are uncomfortable navigating the many ethical challenges that arise when conducting research with human beings, we must remind ourselves that to enter another's world as a researcher is a privilege, not a right. Wrestling with ethical dilemmas is the price we pay for the privileges we enjoy. It is a responsibility, not a choice, and, when taken seriously, it may be one of the most important benefits we have to offer those who make our work possible.

ACKNOWLEDGMENTS

I wish to thank John Donaldson, Timothy Pachirat, Peregrine Schwartz-Shea, Dean Sharpe, Lahra Smith, Robin Turner, Dvora Yanow, and two anonymous reviewers for their helpful comments. I also wish to thank Aarie Glas for his able research assistance and the US Institute of Peace and George Washington University for generous funding to support fieldwork in Rwanda.

Footnotes

1 The act that mandated the creation of IRBs was the National Research Act of 1974 (Yanow and Schwartz-Shea 2007, 8). The IRB is the official body at American universities charged with reviewing and approving the "ethics protocol" of all research involving "human subjects." In Canada, the equivalent body is the Research Ethics Board. Similar bodies exist at universities in Europe and elsewhere. The IRB review process is highly bureaucratized and can be extremely burdensome. For a discussion of the effectiveness of IRBs in political science, see Carapico (2006). For a broader discussion of reforms to IRB rules and procedures, see Bosk and de Vries (2004).

2 For space reasons, I am unable to discuss the ethical issues that pertain to all of these methods, but specific literatures exist. See, for example, Raffe et al. (1989) on surveys and Paluck (2009) on field experiments.

3 I thank Timothy Pachirat for this point.

5 See, for example, the maps that Wood's (2003, 48) participants created. Wood returned the maps to their creators rather than destroying or keeping them.

6 I use this term with caution because "vulnerable" groups are not always weak or unknowing (Hemming 2009).

7 Even in controlled laboratory experiments, where the researcher supposedly controls the conditions, he or she may still be blind to potential risks and harms. See the Stanford Prison Experiment (http://www.prisonexp.org/).

8 All names are pseudonyms to protect identities.

9 Discussant comments by Alison Des Forges, African Studies Association annual meeting, 16 November 2008, Chicago, IL.

References

Bayard de Volo, Lorraine. 2009. "Participant Observation, Politics, and Power Relations: Nicaraguan Mothers and U.S. Casino Waitresses." In Political Ethnography: What Immersion Contributes to the Study of Power, ed. E. Schatz. Chicago: University of Chicago Press.
Bell, Pam. 2001. "The Ethics of Conducting Psychiatric Research in War-Torn Contexts." In Researching Violently Divided Societies, ed. M. Smyth and G. Robinson. Tokyo: United Nations University Press.
Bosk, Charles I., and Raymond G. de Vries. 2004. "Bureaucracies of Mass Deception: Institutional Review Boards and the Ethics of Ethnographic Research." Annals of the American Academy of Political and Social Science 595: 249–63.
Bourgois, Philippe. 1990. "Confronting Anthropological Ethics: Ethnographic Lessons from Central America." Journal of Peace Research 27 (1): 43–54.
Carapico, Sheila. 2006. "No Easy Answers: The Ethics of Field Research in the Arab World." PS: Political Science and Politics 39 (3): 429–31.
Carpenter, Charli. 2012. "‘You Talk of Terrible Things So Matter-of-Factly in This Language of Science’: Constructing Human Rights in the Academy." Perspectives on Politics 10 (2): 363–83.
Cassell, Joan. 1980. "Ethical Principles for Conducting Fieldwork." American Anthropologist 82 (1): 28–41.
Condominas, Georges. 1973. "Distinguished Lecture 1972: Ethics and Comfort: An Ethnographer's View of His Profession." In Annual Report: American Anthropological Association.
Dauphinée, Elizabeth. 2007. The Ethics of Researching War: Looking for Bosnia. Manchester: Manchester University Press.
du Toit, Brian M. 1980. "Ethics, Informed Consent, and Fieldwork." Journal of Anthropological Research 36 (3): 274–86.
Ellis, Carolyn. 1995. "Emotional and Ethical Quagmires in Returning to the Field." Journal of Contemporary Ethnography 24 (1): 68–98.
Fujii, Lee Ann. 2006. "Rescuers and Killer-Rescuers during the Rwandan Genocide: Rethinking Standard Categories of Analysis." Paper read at Pratiques de sauvetages en situations génocidaires, Centre d'histoire de Sciences politiques, Paris.
Fujii, Lee Ann. 2008. "The Power of Local Ties: Popular Participation in the Rwandan Genocide." Security Studies 17: 1–30.
Fujii, Lee Ann. 2009. Killing Neighbors: Webs of Violence in Rwanda. Ithaca, NY: Cornell University Press.
Fujii, Lee Ann. 2010. "Shades of Truth and Lies: Interpreting Testimonies of War and Violence." Journal of Peace Research 47 (2): 231–41.
Fujii, Lee Ann. 2011. "Rescuers and Killer-Rescuers during the Rwanda Genocide: Rethinking Standard Categories of Analysis." In Resisting Genocide: The Multiple Forms of Rescue, ed. Jacques Semelin, Claire Andrieu, and Sarah Gensburger, trans. Emma Bentley and Cynthia Schoch, 145–57. New York: Columbia University Press.
Goduka, Ivy N. 1990. "Ethics and Politics of Field Research in South Africa." Social Problems 37 (3): 329–40.
González-López, Gloria. 2011. "Mindful Ethics: Comments on Informant-Centered Practices in Sociological Research." Qualitative Sociology 34 (3): 447–61.
Gray, Bradford H. 1975. Human Subjects in Medical Experimentation: A Sociological Study of the Conduct and Regulation of Clinical Research. New York: John Wiley & Sons.
Guillemin, Marilys, and Lynn Gillam. 2004. "Ethics, Reflexivity, and ‘Ethically Important Moments’ in Research." Qualitative Inquiry 10 (2): 261–80.
Hemming, Judy. 2009. "Exceeding Scholarly Responsibility: IRBs and Political Constraints." In Surviving Field Research: Working in Violent and Difficult Situations, ed. C. L. Sriram, J. C. King, J. A. Mertus, O. Martin-Ortega, and J. Herman. London: Routledge.
Kier, Elizabeth. 2003. "Designing a Qualitative Methods Syllabus." Qualitative Methods 1 (1): 24–26.
MacLean, Lauren Morris. 2006. "The Power of Human Subjects and the Politics of Informed Consent." Qualitative Methods 4 (2): 13–15.
Pachirat, Timothy. 2011. Every Twelve Seconds: Industrialized Slaughter and the Politics of Sight. New Haven, CT: Yale University Press.
Paluck, Elizabeth Levy. 2009. "Methods and Ethics with Research Teams and NGOs: Comparing Experiences across the Border of Rwanda and Democratic Republic of Congo." In Surviving Field Research: Working in Violent and Difficult Situations, ed. C. L. Sriram, J. C. King, J. A. Mertus, O. Martin-Ortega, and J. Herman. London: Routledge.
Peritore, N. Patrick. 1990. "Reflections on Dangerous Fieldwork." The American Sociologist (Winter): 359–72.
Raffe, David, Ivor Bundell, and John Bibby. 1989. "Ethics and Tactics: Issues Arising from an Educational Survey." In Ethics of Educational Research, ed. R. G. Burgess. London: Falmer Press.
Romano, David. 2006. "Conducting Research in the Middle East's Conflict Zones." PS: Political Science and Politics 39 (3): 439–41.
Ross, Fiona C. 2005. "Codes and Dignity: Thinking about Ethics in Relation to Research on Violence." Anthropology Southern Africa 28 (2–4): 99–107.
Schatz, Edward, ed. 2009. Political Ethnography: What Immersion Contributes to the Study of Power. Chicago: University of Chicago Press.
Sluka, Jeffrey A. 1990. "Participant Observation in Violent Social Contexts." Human Organization 49 (2): 114–26.
Svensson, Marina. 2006. "Ethical Dilemmas: Balancing Distance with Involvement." In Doing Fieldwork in China, ed. M. Heimer and S. Thøgersen. Honolulu: University of Hawai‘i Press.
Tertsakian, Carina. 2008. Le Château: The Lives of Prisoners in Rwanda. London: Arves Books.
Woliver, Laura R. 2002. "Ethical Dilemmas in Personal Interviewing." PS: Political Science and Politics 35 (1): 677–78.
Wood, Elisabeth Jean. 2003. Insurgent Collective Action and Civil War in El Salvador. Cambridge: Cambridge University Press.
Wood, Elisabeth Jean. 2006. "The Ethical Challenges of Field Research in Conflict Zones." Qualitative Sociology 29 (3): 373–86.
Yanow, Dvora, and Peregrine Schwartz-Shea, eds. 2006. Interpretation and Method: Empirical Research Methods and the Interpretive Turn. Armonk, NY: M.E. Sharpe.
Yanow, Dvora, and Peregrine Schwartz-Shea. 2007. "Institutional Review Boards and Field Research." Paper read at the Annual Meeting of the American Political Science Association, Chicago, IL.