
“We're Off to Replace the Wizard”: Lessons from a Collaborative Group Project Assignment

Published online by Cambridge University Press:  12 June 2012

Miguel Centellas
Affiliation:
University of Mississippi
Gregory J. Love
Affiliation:
University of Mississippi

Abstract

This article examines the effectiveness of a collaborative group learning project for teaching a core competency in comparative politics: constitutional structures. We use a quasi-experimental design and propensity score matching to assess the value of a constitutional writing group project and presentation. The results provide strong evidence that these learning tools are highly valuable for teaching abstract concepts. Students who participated in the project scored significantly higher on a short series of questions in final exams given several weeks after the completion of the group project. Somewhat paradoxically, the project increased competency but did not affect student self-reported interest in the subject matter. The article concludes with the challenges of using these types of learning tools and the improvements that can be made to them.

Type
The Teacher
Copyright
Copyright © American Political Science Association 2012

Questions about how to improve student understanding of core concepts drive a growing interest in pedagogical innovation in undergraduate political science education. This article explores one such approach: collaborative group project assignments. Using a quasi-experimental design, we empirically test whether this approach improved student learning and, if so, by how much. In discussions about various kinds of “active,” “cooperative,” or “team-based” learning, arguments focus on why these approaches better “engage” students or otherwise improve the educational experience (Johnson, Johnson, and Smith 1991; Michaelson, Knight, and Fink 2002; Millis and Cottell 1998). These approaches embrace a range of pedagogical styles, including “active reading” (Daley 1995), using film or other audiovisual components to “enhance” learning (Ulbig 2009), and engaging students through blogging and social media tools (Lawrence and Dion 2010). A much larger literature focuses on simulations.Footnote 1 This article looks at one example of cooperative and collaborative learning: a small and underdiscussed subset of the “active” learning pedagogy. To date, we have found only one reference to the use of cooperative learning in political science (Occhipinti 2003), which offered a descriptive guide to its pedagogical application. Although we have used this (and the broader “cooperative” or “collaborative” literature) as a guide, this article is particularly interested in empirically testing a core claim made by proponents of cooperative and collaborative learning: that such activities improve student learning.

Our test emerged from a particular interest in determining whether a specific collaborative group project assignment (which one of the authors had previously used) was an effective tool for learning differences in democratic political institutions. The assignment—in which students, working in small groups over several weeks, draft a constitutional framework for a fictional country—had been used in two previous upper-level courses on democratization. Those experiences suggested collaborative assignments helped students develop a working understanding of the range of institutional engineering choices available to emerging democracies. We were interested in testing whether the assignment could be adapted to an introductory-level course in comparative politics at a large public university. In our experience teaching introductory courses, students have difficulty understanding unfamiliar democratic institutional systems, such as unitary systems, proportional representation, or other forms of nonplurality voting, as well as parliamentary or semipresidential systems. We hoped that by using a collaborative learning assignment students would master these difficult concepts. We planned to test whether the assignment met that goal.

COOPERATIVE TEAM LEARNING AND COLLABORATIVE LEARNING

Our collaborative group project fits within the broader categories of “active” and “experiential” learning (Johnson, Johnson, and Smith 1991; Michaelson, Knight, and Fink 2002; Millis and Cottell 1998). The term “active learning” describes any teaching approach that transcends the traditional lecture environment, which is frequently described as a “passive” mode of learning. Active learning pedagogy describes a variety of techniques, such as staging classroom debates, fostering small group activities, or making students engage in illustrative “games.” The term “experiential learning” describes approaches that take active learning further by providing students with experiences that either mirror “real world” conditions (simulations) or offer direct experience in a real world setting (internships, field-site activities) or shift student attitudes or perceptions by exposing them to new environments (often a key element of “service” learning). In terms of pedagogy, both approaches engage students in learning through practical, “hands-on” application of material and concepts. Within these two broad categories are two specific pedagogical approaches that could describe our project: cooperative team learning and collaborative learning. Although the project we describe here more closely fits the cooperative team learning approach than the collaborative learning approach, we call our project a “collaborative group project” because of significant distinctions from both approaches.

Cooperative team learning is “highly structured” and “entails positive interdependence and student accountability” (Occhipinti Reference Occhipinti2003, 69). Unlike traditional kinds of small-group, in-class activities, a key distinguishing feature of cooperative team learning is that groups work together over a longer period. Additionally, students are assessed (i.e., graded) both collectively (which fosters interdependence) and individually (which fosters accountability). Both elements are critical: collective assessment requires students to cooperate and help each other master learning goals and successfully execute projects; individual assessment reduces the incentive for students to become “free riders.” Another key feature of cooperative team learning is that teachers closely monitor groups and provide requisite “social training” or some framework of rules and group roles. In many ways, cooperative team learning is an extended form of group learning, but with static group membership that extends beyond a single class period (whether only a few weeks or a full semester).

Collaborative learning, in contrast, specifically demands limited teacher guidance to allow groups to interpret, question, challenge, or even critically “break down” larger frameworks of understanding. Derived from feminist pedagogy, collaborative learning allows students to interpret, critique, deconstruct, or create their own frameworks of understanding. This approach is well suited for engaged, critical thinking—particularly when the goal is to tackle more normative, subjective, or theoretical issues, such as justice, freedom, and democracy.

Cooperative forms of learning are well suited to help students master relevant material, functioning as “study groups.” The assumptions underlying cooperative learning are that students share responsibilities and help each other master common material using consciously constructed incentive structures (collective assessment) to ensure that stronger students help weaker students. Collaborative learning is better suited to develop students' critical thinking skills because students work on projects that do not have a priori “solutions.” In addition, by requiring limited teacher oversight, collaborative learning fosters student independence and provides space for student exploration.

We designed our project to blend some of the assumptions and goals of cooperative and collaborative learning approaches. We wanted students to work in groups to collaborate on a project with no objective “solution” (there was no “correct” constitution for our fictional country). Thus, students brought their own normative values or assumptions about human nature to the group, which then collectively developed an understanding of the assignment. However, we did not want our students to stray too far from the basic concepts we wanted them to understand (executive type, electoral system, federal-unitary model). Thus, we expected our students to work together to “learn” how to distinguish between institutional models.

THE ASSIGNMENT: A CONSTITUTIONAL DESIGN FOR OZ

In our collaborative group project students worked in small groups to draft a constitutional framework for the Land of Oz.Footnote 2 Each group consisted of five to seven students and had three to four weeks to work collaboratively to produce two final products: a short paper (six to seven pages) and an oral presentation (seven to ten minutes) to the class. Students organized their group's efforts independently with little direct involvement from their instructor. Students knew that each group's final paper and presentation would receive a grade, which would become the “base” grade for each group member.Footnote 3

Although both instructors were experienced faculty members, we chose two treatment courses because of their structural differences.Footnote 4 We wanted to see whether the group assignment would have similar effects on two otherwise structurally dissimilar courses. Class A was a midsized lecture class with 54 students; Class B was a composite class of on-campus students (24) and off-campus students (7) who met simultaneously via video link. Both classes met twice a week and were primarily lecture-driven with some discussion. Additional similarities and differences are listed in table 1.

Table 1 Comparison of Comparative Politics Classes

The project began midway through the semester: students were placed randomly into groups and given the assignment's guidelines. Then, students received a dossier that included a map of Oz and a brief description of the fictional country's political geography, demographic divisions, and recent history.Footnote 5 We made it clear that students were expected to use the information in the dossier and refer to it throughout the project. Their task was not to design an “ideal” constitutional framework, but rather one that was tailored to the needs and realities of Oz.

Students were required to specifically consider three primary questions, each of which had to be addressed in the final written and oral reports:

  (1) What kind of executive system should Oz adopt (presidential, parliamentary, or semipresidential)?

  (2) What kind of electoral system should Oz adopt (first-past-the-post, proportional representation, or other)?

  (3) Should Oz adopt a federal or unitary system?

In addition to these questions, students could address other issues they thought important, and they addressed a wide variety of them. Although these additional issues were included in our overall assessment of the group product, we were primarily interested in the three primary questions, on which we focus in this article.

Although we adopted different textbooks for our respective classes, both texts covered these fundamental topics. Class A used Patrick O'Neil's Essentials of Comparative Politics, which covers all three topics in Chapter 5 (“Democratic Regimes”). Class B used Carol Ann Drogus and Stephen Orvis's Introducing Comparative Politics, which covers the topics in Chapters 6 (“Political Institutions: Governing”) and 7 (“Political Institutions: Participation and Representation”). Despite their differences, both texts provide similar discussions of electoral systems, executive type, and federalism to give students a baseline for their research.

Each of us also pursued a slightly different pedagogical approach throughout the semester. Class A combined the textbook with the ancillary Cases in Comparative Politics (O'Neil et al. 2009), covering 13 different countries during the semester, and in-class economic games for teaching basic political economy concepts. Class B combined the textbook with a series of 10 thematic additional “texts” (including audio podcasts, video, and online interactive content) that were paired with short analytical writing assignments (students chose to write on any three of the “optional” assignments, plus one required assignment, for a total of four).

ASSESSING THE PROJECT'S EFFECTIVENESS: A QUASI-EXPERIMENTAL DESIGN

The primary goal of the collaborative assignment was for students to learn about three key dimensions of democratic institutional design: executive-legislative relations, electoral system, and federal-unitary structures. Ideally, we hoped students would complete our classes with an understanding of the costs and benefits of each system. Although students received a grade based on their collaborative effort, we hoped that—through participating in a collaborative project assignment—individual students would meet our minimal expectations of correctly identifying different institutional systems. For measurement validity across the three classes, we focused on an assessment of minimal learning outcomes. However, anecdotal evidence suggests that a range of students were able to articulate important differences between institutional systems, as well as the implications of such differences.

The final exam was our primary instrument for assessing whether students met our minimum expectations. Although each exam reflected material covered in our respective classes, we included four identical multiple-choice questions that specifically asked students to identify different institutional systems along our three dimensions (see the appendix).

To determine whether our collaborative assignment enhanced student learning, students in another class (C; taught by an experienced graduate student instructor under the guidance of one of the authors) answered the same four multiple-choice exam questions as part of their final exam. Although these students did not participate in a collaborative project, they completed the same confidential pretreatment survey as the two “treatment” classes (A and B).Footnote 6

We expected students to perform significantly better in the “treatment” classes. Using three classes allowed us to control for potential instructor, textbook, and pedagogical effects (see table 1) unrelated to the collaborative assignment.Footnote 7 Although it is possible that instructor experience was a factor in our results, two reasons suggest that this was not the case: First, exam and end-of-semester grades for all three classes were generally consistent, suggesting that students completed all three courses with similar mastery of material in other areas of the course. Second, we found no evidence that student evaluation of “teacher quality” had any effect: Both midsized enrollment classes (A and C) had similar teaching evaluations, suggesting differences in student performance on the test indicators were not due to teacher quality effects. Similarly, Class B had significantly different teaching evaluations from Class A, yet Class B showed no similar difference in student performance on the four indicators.Footnote 8

We also controlled for independent variables that could affect our dependent variable (performance on the “institutional” final exam questions) across individual students. Thus, we also distributed a pretreatment questionnaire to all three classes. Although the survey was not anonymous (we needed to match individual survey responses to assignment and test performance), it was confidential. Furthermore, as we told the students, we did not review the survey until after the end of the semester and final grades had been submitted. Our pretreatment questionnaire included demographic questions (age, race, gender, socioeconomic class), nonstandard demographic questions unique to educational environments (disciplinary major), and questions designed to tap into various attitudinal dimensions (ideology, trust, and social communication). See table 2.

Table 2 Demographic Indicators across Comparative Politics Classes

Finally, we were also interested to learn what students thought about working in our collaborative project along different dimensions: student behavior (how often the group met, how many hours were spent on the project, how work was distributed among group members), attitudes toward the project (what aspects of the project were most enjoyable or difficult, what students would change), and whether participating in the collaborative assignment changed interest in the course. We assessed this last dimension by asking students about their level of interest in “comparative politics and the subjects we cover in class.” We also included this question in the posttreatment survey, allowing us to see if attitudes shifted after participating in the collaborative project. Because students in class C did not participate in the collaborative project, they did not take this second (posttreatment) survey. Although this approach limits our ability to explore differences in the interest-level dimension between the treatment and nontreatment cases, we were primarily interested in changes across a semester from a baseline interest in the subject. We hoped that a “fun” collaborative assignment near the end of the semester would generate interest in comparative politics.

ASSESSING THE PROJECT'S EFFECTIVENESS: THE EVIDENCE

Evidence suggests that the collaborative assignment helped students learn about different political institutional designs. On average, students in the treatment classes scored better on the four final exam multiple-choice questions about institutions than students in the nontreatment class—and that difference was statistically significant (table 4).Footnote 9 Additionally, we found no significant difference in test scores between the two treatment classes, suggesting that textbook and pedagogical approaches had no significant impact on student performance.

As table 3 illustrates, students who participated in the group project scored dramatically higher on all four questions from the final exam. The smallest gap between the treatment and nontreatment groups is on the executive type question, yet this gap is still 35 percentage points. The majority of those in the treatment group answered all four questions correctly, yet only 12% of the nontreatment group did so. As table 4 shows, after taking into account differences between the two groups, the average number correct in the treatment group (Cronbach's α = .76 for both groups) was 3.6; in the nontreatment group it was only 1.1. Participating in a collaborative group project had a significant impact on students' competency in fundamental topics often at the center of comparative politics.

Table 3 Student Performance on Institutions Exam Questions across Classes

Table 4 Treatment Effect of Project Using Propensity Score Matching (N = 99)*

* Average number of correct questions does not match figures in table 3 because of the propensity score matching procedure, which drops outlier cases. Subjects were matched on final class grade, gender, parental education level, hours studied per week, race, and interest in the subject matter.
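The matching-and-comparison procedure described in the note above can be sketched in code. The following is an illustrative reconstruction, not the authors' analysis: the covariates, sample size, and built-in effect size are invented for the example, and the article's own analysis used the MatchIt-style preprocessing of Ho et al. (2007) rather than this hand-rolled matcher.

```python
# Illustrative sketch of propensity score matching (NOT the authors'
# code): estimate propensity scores, match each treated student to the
# 3 nearest controls, and compare mean exam scores. All data simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
# Hypothetical pretreatment covariates (e.g., hours studied, interest)
X = rng.normal(size=(n, 2))
# Treatment assignment depends mildly on the covariates (selection bias)
treat = (rng.random(n) < 1 / (1 + np.exp(-0.5 * X[:, 0]))).astype(int)
# Exam score (0-4) with a built-in treatment effect of ~2 points
score = np.clip(1 + 2 * treat + 0.3 * X[:, 1] + rng.normal(0, 0.5, n), 0, 4)

# 1. Estimate each student's propensity to be in the treatment group
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 2. Match each treated student to the 3 nearest controls on the
#    propensity score (with replacement), echoing the 3:1 procedure
t_idx = np.flatnonzero(treat == 1)
c_idx = np.flatnonzero(treat == 0)
matched = [score[c_idx[np.argsort(np.abs(ps[c_idx] - ps[i]))[:3]]].mean()
           for i in t_idx]

# 3. Average treatment effect on the treated: treated mean minus the
#    mean score of each treated student's matched controls
att = score[t_idx].mean() - np.mean(matched)
print(f"estimated treatment effect: {att:.2f} points")
```

Because the matching step compares students with similar covariate profiles, the estimate is less sensitive to compositional differences between classes than a raw means comparison would be.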

The project appears to have increased students' understanding of differences in formal constitutional structures; however, this improvement was not uniform. Although the treatment improved students' test results for each of the questions, the effect size was smallest for the first question, dealing with executive type. This indicates that American students are likely to have difficulty understanding the nuances between presidential and parliamentary systems, in particular the differing sources of executive authority and constituency.

One advantage of a posttreatment survey is that it allows us to review what changes, if any, the project produced in students' interest in the subject matter. Our results indicate that although the treatment had a clear and dramatic effect on students' understanding of constitutional systems, it had no effect on students' interest in comparative politics. Nearly every student rated their interest in the subject matter at the same level in both the pre- and posttreatment surveys.Footnote 10 There was a clear disconnect between the treatment's ability to increase understanding and its ability to increase interest in the subject matter. Again, we saw no effect of student evaluations of “teacher quality” on student interest. Despite a significant difference in student evaluation scores between the two treatment classes, students left the courses with similar interest (or disinterest) in the subject matter.

The posttreatment survey also allows us to examine which students enjoyed the project and what common difficulties students faced in completing it. In general, students who reported studying more hours per week and who had higher grades in the classes were the least satisfied with the group's product (Pearson's r = −0.23, p < .05). Similarly, students who said someone in the group did more of the work enjoyed the project least (Pearson's r = 0.45, p < .01). These correlations hold at the group-average level (Pearson's r = −0.7, p < 0.01 and Pearson's r = −0.52, p < 0.05, respectively). This suggests that the collective nature of the group project—and the collective action problem groups inherently generate—caused better students to view the quality of the product and the functioning of the group more negatively. Anecdotal evidence shows that “good” students felt overburdened and stressed during the project, particularly if one or more group members did not actively participate. To some degree, the postpresentation quiz that gave students an opportunity for intragroup peer critique helped alleviate some students' concerns about unequal work—as did our grade adjustment procedure.

Finally, we anticipated that students who expressed higher levels of comfort communicating in social situations (as measured by a scale constructed from a battery of questions, Cronbach's α = 0.82) would benefit the most from the treatment. Although higher levels of comfort with social communication were correlated with satisfaction with the group product (Pearson's r = 0.23, p < 0.05 at the individual level; Pearson's r = 0.56, p < 0.01 at the group-mean level), comfort levels were not correlated with the measured learning outcomes.
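The internal-consistency statistic cited above, Cronbach's alpha, can be computed directly from a respondents-by-items score matrix. The sketch below is illustrative only: the simulated item data and the number of respondents are assumptions, not the article's survey data; only the formula itself is standard.

```python
# Illustrative computation of Cronbach's alpha, the internal-consistency
# statistic the article reports for its item batteries. Data simulated.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                  # shared underlying trait
items = latent + rng.normal(0, 0.8, size=(200, 4))  # four noisy indicators
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Alpha rises as the items share more variance with the common trait; values near the article's reported 0.76 and 0.82 indicate a reasonably coherent scale.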

CONCLUSION

The evidence supports our claim that our collaborative group project assignment successfully taught core concepts of institutional design to students in an introductory-level course in comparative politics. However, we were surprised to find that participating in such a project had no significant effect on students' self-reported interest in political science. Proponents of “active” modes of learning often claim that such approaches better “engage” students. In particular, proponents of “cooperative” and “collaborative” learning strategies often have (implicit or explicit) goals of developing core “civic” values. Our findings suggest that these claims may not hold. We did find that students better learned course material using a collaborative group project; but we found no evidence that the experience carried beyond the immediate context or affected student attitudes toward politics or learning.

Based on our reflections on the strengths and weaknesses of the project, we offer the following advice for others attempting similar projects:

  (1) Assign groups earlier and use them regularly to encourage students to prepare for other class assignments. This is closer to Occhipinti's (2003) cooperative team learning model; this approach might also foster greater bonds of trust between group members in a low-stakes setting prior to working on a much larger graded project.

  (2) Set aside class time for groups to meet and work together. One common complaint raised by students was the difficulty of finding time to meet in their groups. This in-class group time would also allow the instructor to meet with each group, even if briefly, and address any questions, issues, or potential conflicts.

  (3) Assign specific—but focused—reading assignments on the core concepts and give periodic quizzes throughout the project on specific concepts (e.g., executive type and executive-legislative relations). Periodic individual- and group-level assessments (holding groups accountable for the performance of their members) can offer metrics for progress, help clarify questions before the final assignment, and provide regular goalposts for groups (and help avoid group procrastination).

  (4) Finally, ask each student to write a brief (one-page) reflection to submit with the final group project. This encourages student accountability, gives each student “ownership” of the project, and provides a venue for any dissatisfied group members.

Cooperative and collaborative learning strategies are effective in helping students learn course materials. The evidence from our collaborative group project assignment suggests a significant boost in student performance after participating in such projects. This result is particularly remarkable considering that our final exams were given nearly a month after the group project was completed. We encourage colleagues to develop assignments that incorporate these pedagogical approaches.

APPENDIX A

Multiple choice questions used to assess student understanding of the three institutional dimensions:

  1. A parliamentary system is ____.

    a. a form of government in which voters select a head of state by direct election, but the legislature also selects a head of government from among its members

    b. a form of government in which the legislature selects the head of government, rather than having voters directly elect him or her

    c. a form of government in which voters directly elect the head of government in a separate election from the legislature

    d. when the president faces an opposition-controlled legislature, which exercises greater powers and reduces the powers of the chief executive

  2. A unitary state is ____.

    a. a political system in which the central government has sole constitutional sovereignty and power, with local units merely serving as administrative divisions

    b. a political system in which the state's power is legally and constitutionally divided among more than one level of government, with local units exercising significant autonomy

    c. a regime in which a single individual or party dominates

    d. a regime that mixes elements of democracy with authoritarian rule

  3. A vote of no confidence is ____.

    a. what political scientists call midterm elections in which the party of a sitting president loses a large number of seats

    b. a process by which elected officials are censured for misconduct by their peers

    c. a procedure in presidential systems that allows the legislature to remove a head of government from power by a simple majority vote

    d. a procedure in parliamentary systems that allows the legislature to remove a head of government from power by a simple majority vote

  4. Proportional representation is ____.

    a. a legal requirement that reserves legislative seats based on quotas (based on gender or ethnicity) to ensure that minorities receive proportional political representation

    b. an electoral system in which individual candidates are elected in single-member districts and the candidate with the most votes wins

    c. an electoral system in which seats in the legislature are apportioned so that the share of seats for each party matches its share of the vote

    d. an electoral system in which voters rank order their preferred candidates

APPENDIX B

Multivariate regression analysis of the effects of project participation (“treatment effect”) on the number of exam questions answered correctly (0–4).

* p < 0.05,

** p < 0.01

As the table shows, there was no significant relationship between performance on the four multiple-choice questions (Appendix A) and attitudinal, demographic, or study habit variables. When controlling for other factors, students in the two treatment classes performed significantly better (translating to about two additional correct questions, out of four) across all models.
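The model summarized above can be sketched as an ordinary least squares regression of the correct-answer count on a treatment dummy plus pretreatment controls. The sketch below is a hedged illustration: the data, the control variables, and their names are simulated assumptions, not the authors' dataset.

```python
# Hedged sketch of an Appendix B-style regression: OLS of the number of
# correct answers (0-4) on a treatment dummy plus pretreatment controls.
# All data are simulated; variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n = 150
treat = rng.integers(0, 2, n)        # 1 = collaborative-project class
hours = rng.normal(10, 3, n)         # self-reported hours studied/week
interest = rng.normal(0, 1, n)       # baseline interest in the subject
# Simulated outcome: treatment worth roughly 2 extra correct answers
correct = np.clip(np.round(1 + 2 * treat + 0.02 * hours
                           + rng.normal(0, 0.5, n)), 0, 4)

# OLS via least squares; columns: intercept, treatment, controls
X = np.column_stack([np.ones(n), treat, hours, interest])
beta, *_ = np.linalg.lstsq(X, correct, rcond=None)
print(f"treatment coefficient ≈ {beta[1]:.2f} correct answers")
```

A treatment coefficient near two, with control coefficients near zero, mirrors the pattern the appendix describes: the project, not study habits or demographics, drives the score difference.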

Footnotes

1 Nearly every issue of PS: Political Science & Politics includes an article (or more) on simulations.

2 Our Oz was derived from the vision of Oz presented in Gregory Maguire's best-selling novel, Wicked. While we did not expect our students to be familiar with Maguire's interpretation, we expected students to have at least a passing familiarity with L. Frank Baum's Oz, as interpreted over the years in various films and children's books.

3 Individual grades were adjusted up or down, based on a brief postproject quiz that included subjective intragroup peer evaluations and objective questions about the content of their own group's project. This quiz was separate from the pre- and posttreatment surveys and was distributed before the in-class presentations. Students were asked to name the three specific institutions their group chose (students who were not aware of their group's chosen model had their individual score reduced) and to name one other person (other than themselves) who had done significant work for the group (those students were rewarded with a grade boost). The purpose of this quiz was to punish free riders and reward the contributions of those who demonstrated greater involvement in the group's project.

4 Class A was taught by Gregory J. Love; Class B was taught by Miguel Centellas. Both are assistant professors with at least three years of teaching experience.

5 Assignment guidelines and information dossier available online at: http://mcentellas.com/teaching/Oz-dossier-pol102.pdf

6 Our pre- and posttreatment surveys—as well as our research design—were approved by our university's Institutional Review Board. It is filed as: “Constitutional Design Simulation” (Protocol 11-031).

7 The structure of the comparison, with two groups within the treatment, takes advantage of the strengths of both most-similar (classes A and C) and most-different (classes A and B) systems designs. Table 1 illustrates the five dimensions along which the three courses were similar and/or different.

8 The generalized (reported) teacher evaluation “score” at our university is based on the question: “How would you rate the instructor's overall performance in this course?” The scores for Class A and C were 2.48 and 2.60 (on a 4-point scale), respectively; the score for Class B was 3.52 (the university mean was 2.97). We should note, of course, that teaching evaluations are not always accurate measures of teacher quality.

9 Statistical tests to determine the treatment effect were conducted using propensity score matching to account for any systematic differences in the makeup of the students in the three classes (Ho et al. 2007; Rubin 1979). Students in the treatment group were matched with students from the nontreatment group based on sociodemographics, study habits, and subject interest from the pretest questionnaire. A 3:1 matching procedure was used. Results from a simple means-difference test or nonparametric tests are not substantively different.

10 The two surveys were administered approximately five weeks apart.


References

Daley, Anthony. 1995. “On Reading: Strategies for Students.” PS: Political Science & Politics 28 (1): 89–100.
Drogus, Carol Ann, and Stephen Orvis. 2009. Introducing Comparative Politics: Concepts and Cases in Context. Washington, DC: CQ Press.
Ho, Daniel E., Kosuke Imai, Gary King, and Elizabeth A. Stuart. 2007. “MatchIt: Nonparametric Preprocessing for Parametric Causal Inference.” Political Analysis 15 (3): 199–236.
Johnson, David W., Roger T. Johnson, and Karl A. Smith. 1991. Active Learning: Cooperation in the College Classroom. Edina, MN: Interactive Book.
Lawrence, Christopher N., and Michelle L. Dion. 2010. “Blogging in the Political Science Classroom.” PS: Political Science & Politics 43 (1): 151–56.
Michaelson, L. K., A. B. Knight, and L. D. Fink, eds. 2002. Team-Based Learning: A Transformative Use of Small Groups. Westport, CT: Praeger.
Millis, Barbara J., and Philip G. Cottell. 1998. Cooperative Learning for Higher Education Faculty. Phoenix, AZ: American Council on Education & Oryx Press.
O'Neil, Patrick. 2010. Essentials of Comparative Politics, 3rd ed. New York: W. W. Norton.
O'Neil, Patrick, Karl Fields, and Don Share. 2009. Cases in Comparative Politics, 3rd ed. New York: W. W. Norton.
Occhipinti, John D. 2003. “Active and Accountable: Teaching Comparative Politics Using Cooperative Team Learning.” PS: Political Science & Politics 36 (1): 69–74.
Rubin, Donald B. 1979. “Using Multivariate Matched Sampling and Regression Adjustment to Control Bias in Observational Studies.” Journal of the American Statistical Association 74 (366): 318–28.
Ulbig, Stacy. 2009. “Engaging the Unengaged: Using Visual Images to Enhance Students' ‘Poli Sci 101’ Experience.” PS: Political Science & Politics 42 (2): 385–92.