Introduction
Given the limited resources available for conservation, it is imperative that funding is channelled towards approaches that can be proven successful (Sutherland et al., 2004; Ferraro & Pattanayak, 2006). Considerable effort has recently been devoted to devising frameworks that help to monitor the progress and success of conservation projects (Sutherland et al., 2004; Hockings et al., 2006; Pullin & Stewart, 2006; Mace et al., 2007; Kapos et al., 2008; Salafsky et al., 2008), but these attempts are still hampered by the fact that the effects of many conservation projects on target populations or habitats only become measurable well beyond the time frame of the usual project cycle (Kapos et al., 2008). Long-term monitoring of targeted species or habitats would be desirable but is often not practical. An alternative approach is to identify predictors of success that are both easier to measure and serve as reliable surrogates for long-term data on the changing status of conservation targets.
The Cambridge Conservation Forum (CCF), a network of conservation organizations and researchers based in and around Cambridge, UK (CCF, 2009), has developed a project evaluation tool (Kapos et al., 2008). This draws on a conceptual framework (Kapos et al., 2008) that is itself an elaboration of the idea of results chains for conservation interventions (Salafsky et al., 2001, 2008). Together the framework and evaluation tool enable us to investigate the link between intermediate steps (such as activities, outputs and outcomes) and the ultimate success (conservation effects) of a project. Here we use a preliminary dataset to test whether measures of project implementation (as typically reported to donors) are useful predictors of conservation effects, or whether measuring subsequent outcomes is a better alternative in terms of how easily the measures can be derived and how well they predict longer-term effects on conservation targets.
Methods
CCF framework and evaluation tool
The CCF framework and evaluation tool begin with the recognition that conservation projects characteristically involve several different types of activity that may each have different outcomes and appropriate measures of success. Based on existing categorizations of conservation action (Salafsky et al., 2002, 2008) and as a result of consultation within CCF, we defined seven broad categories of conservation activity (Table 1) that together encompass most of the work that CCF members and other conservation organizations undertake. Typically conservation projects include several of these activity types (Wilder & Walpole, 2008; Kapos et al., in press).
For each activity type the CCF framework provides a conceptual model of the likely relationships between its successful implementation in a project and conservation impact, making explicit the linkages that are often assumed (Kapos et al., 2008). The models are simplest for species and site management (Fig. 1) and more complex for the other five activity types (e.g. Figs 2 & 3). All models include implementation stages, leading on through a series of outcomes (how an intervention affects the conservation problem of interest), to conservation effect (project-scale effects on the likelihood of persistence of populations or habitats of conservation concern). In all but the two simplest models we identify key outcomes that provide the platform for reducing threats to conservation targets and/or improving the responses of those targets to threats. For livelihoods-related projects the key outcome is the abandonment of the relevant damaging practices; for policy work, it is the implementation of the policies or legislation promoted; for education and awareness raising, a change in behaviour by the intended audience; for capacity-building, increases in the quantity and/or quality of conservation action; and for research, the application of research results to conservation practice.
The score-card–style evaluation tool developed by CCF has a single questionnaire for each conservation activity type. Each questionnaire is based on the appropriate model and comprises carefully worded questions that work progressively through implementation and outputs to outcomes and conservation effects (CCF Outputs, 2009). Each question offers four ordered answers reflecting increasing levels of achievement, which can be scored on a 1–4 scale. There are also further unranked options for when insufficient information is available; these are worded to distinguish between cases where relevant information is anticipated and those where information is not being collected at all.
As an example, the question about the key outcome in the questionnaire on livelihoods-related activities is: ‘Has the target audience reduced their use of the damaging practices addressed by the project?’ and the answers offered are: (a) No; (b) Some individuals have reduced their use; (c) Most individuals have reduced their use and/or some have abandoned them entirely; (d) Most individuals have entirely abandoned damaging practices; (e) Too early to tell but information will be available; (f) Not assessed, necessary information neither available nor anticipated. Answers a–d reflect increasing reduction of damaging practices and e and f deal with missing information.
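The scoring logic described above can be sketched in code. The snippet below is a minimal illustration (the function and data are hypothetical, not part of the published tool): ordered answers a–d map to scores 1–4, while the unranked options e and f are treated as missing information and excluded from an activity's mean score.

```python
# Illustrative scoring of CCF-style questionnaire answers.
# The answer codes follow the livelihoods example in the text;
# the function and variable names are hypothetical.
SCORE = {"a": 1, "b": 2, "c": 3, "d": 4}  # ordered achievement levels

def mean_score(answers):
    """Mean 1-4 score over scorable answers; None if nothing scorable.

    Answers 'e' (too early to tell) and 'f' (not assessed) carry no
    rank and are simply excluded from the mean.
    """
    scored = [SCORE[a] for a in answers if a in SCORE]
    return sum(scored) / len(scored) if scored else None

# One activity's (hypothetical) answers to a set of questions:
print(mean_score(["c", "d", "e", "b"]))  # -> 3.0
```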
Trial application
To test which type of measures (implementation vs outcome) offers the most practical approach for assessing conservation performance (in terms of how easily they can be derived and how effectively they predict long-term effects), the CCF questionnaires were used by conservation professionals to self-evaluate a set of 60 activities from 26 projects. A few of these individuals were involved in the development of the framework and questionnaire but the majority were not. Their answers to the questionnaire were based on the best available evidence, which ranged from the experience and opinion of project managers, often supported by anecdotal evidence, to quantitative data (e.g. from surveys of agricultural practice and statistics on trade).
We used the completed questionnaires firstly to compare how easy it was to answer questions about implementation, outcome and conservation effect. Secondly, for each evaluated activity, we calculated a mean score for the answers to questions at each of these three levels, restricting the analysis of the outcome level to the section on key outcomes (as defined above). We then used these average scores to test the extent to which measures at the two lower levels (implementation and outcome) were good predictors of the activity's conservation effect. This analysis reduced the sample to the 22 activities for which full information was available for all three levels. However, this reduction in sample size did not introduce bias to the sample: projects with full information did not differ from incomplete projects in their scores for implementation or outcome (medians (ranges) for implementation 3.3 (2.8–3.5) vs 3.5 (3.0–4.0), Wilcoxon test W = 452.5, n = 22 vs 37, P = 0.473; for outcome 3.0 (2.0–3.4) vs 2.5 (2.0–3.0), W = 91.0, n = 22 vs 11, P = 0.245).
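The analysis step described above amounts to a rank correlation between level-mean scores across activities. As a sketch only, with toy numbers (the study's actual per-activity scores are not reproduced here), the comparison might look like this; the `spearman_rho` helper is a plain tie-free implementation of Spearman's coefficient:

```python
# Sketch of the analysis: correlate mean implementation and mean
# key-outcome scores against conservation-effect scores across
# activities. All data below are invented for illustration.
def spearman_rho(x, y):
    """Spearman rank correlation (no tie correction; fine for toy data)."""
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos + 1.0
        return r
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical mean scores for six activities:
implementation = [3.0, 3.3, 3.5, 3.1, 2.9, 3.4]
outcome = [2.1, 3.4, 2.0, 3.8, 2.5, 3.0]
effect = [2.0, 3.5, 2.2, 3.9, 2.4, 2.8]

print(f"implementation vs effect: rho = {spearman_rho(implementation, effect):.2f}")
print(f"outcome vs effect:        rho = {spearman_rho(outcome, effect):.2f}")
```

In this toy example the clustered implementation scores correlate weakly with effect while the outcome scores correlate strongly, echoing the pattern reported in the Results; a published analysis would of course use the real per-activity means and an associated significance test.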
Results
As expected, questions were less likely to be answered the further they were along any given model (and thus closer to conservation effect; see Fig. 4 for a detailed example from capacity-building). This was the case irrespective of the type of conservation activity being undertaken (Fig. 5). Questions on key outcomes, despite being less readily answered than those about implementation, were still much easier to answer than those on actual conservation effects.
With regard to what best predicts conservation effects, we found that the degree of implementation of a given activity was a poor predictor of its conservation effect (Spearman rank correlation: ρ = 0.077, n = 22, P = 0.734; Fig. 6a). In contrast, the extent of achievement of key outcomes was a much more powerful predictor of the effect of an intervention, even in this small sample of activities (ρ = 0.668, n = 22, P < 0.001; Fig. 6b). This result is not linked to any tendency in the methodology to promote linkage between adjacent levels; we tested for a close link between implementation and outcome scores and found none (ρ = 0.060, P = 0.792).
Discussion
Our results show that measures of key outcomes can potentially serve as powerful predictors of real conservation effects, and thus project success. While assessing outcomes is more difficult than simply reporting on implementation, measures of implementation were unable to predict success and are thus of little value in tracking conservation impacts. This is important, given that most donors have until now typically required reporting only at the level of project implementation. In contrast, because outcomes already reflect many of the external factors that affect the linkages between activity and impact, they provide a more reliable basis for assessing project success. Although in future it would be desirable to validate this conclusion fully using a larger sample size, targeting project monitoring and reporting efforts and resources more strongly towards outcome-level measures is likely to improve the assessment of conservation achievements.
The CCF framework and evaluation tool provide a standardized approach to improving on current reporting practice through the identification and assessment of outcomes for different activity types. The CCF tools build on and complement existing approaches for using conservation experience to inform future conservation actions, including The Nature Conservancy's 5-S Framework (TNC, 2003), logical frameworks and results chains (Salafsky et al., 2001) and tools for implementing them, such as Miradi (Miradi, 2007), conservation audits based on the CMP's Open Standards (CMP, 2004; O'Neil, 2007), threat reduction assessment (Salafsky & Margoluis, 1999), and the scoring and evaluation frameworks developed for assessing agri-environment schemes (Carey et al., 2003, 2005), protected area management effectiveness (Hockings et al., 2006; Stolton et al., 2008), and interventions by zoos (Mace et al., 2007). The CCF tools also complement the growing drive for evidence-based conservation (Sutherland et al., 2004; Sutherland, 2005) by providing both a means to assess the effects of whole projects, as distinct from single management interventions, and a way of helping to ensure that the data needed for evidence-based conservation are collected.
So far, practitioners who have used the CCF tools report taking no more than a day to assess a project, making it difficult to argue that it is too time-consuming to conduct such assessments (given that projects normally represent several person-years of effort). Although further validation of our conclusion about the usefulness of assessing key outcomes is desirable, we believe that our results provide clear evidence that such measures have the potential to allow practitioners to predict the downstream effects of their projects, even when these will only be realized beyond the project timeframe.
The dataset presented in this paper is relatively small and it was not possible to test the robustness of our general conclusion for each activity type independently. The dataset is also relatively lacking in unsuccessful projects, mirroring the more general patterns in the availability of information about conservation actions that fail (Redford & Taber, 2000). A larger database of assessed projects would allow us not only to address this issue but also to investigate several other interesting questions, such as whether the success of different activity types depends on particular conditions, which combinations of activities provide the best synergies, and which resource allocation strategies are most successful. We therefore strongly encourage practitioners to take advantage of the CCF tools, which are freely available online (CCF Outputs, 2009). Results presented here and feedback from practitioners who have trialled the tool already suggest this will help organizations plan interventions, monitor their progress, and obtain meaningful, early measures of their long-term effectiveness. Expanding the database of evaluated projects would also help us build a much-needed resource with which to start asking precise and quantitative questions about what determines conservation success.
Acknowledgements
This work was supported by a grant from the John D. and Catherine T. MacArthur Foundation to the Cambridge Conservation Forum via the University of Cambridge. The member organizations of the Cambridge Conservation Forum (see http://www.cambridgeconservationforum.org.uk) generously supported the participation of many of their staff members in the working groups, workshops and discussion events of this project. Members of the Conservation Measures Partnership and the IUCN Programme Evaluation Group provided helpful input. Earthwatch and the BAT Biodiversity Partnership also helped with testing and refining the evaluation tool. The University of Cambridge Department of Zoology and UNEP-WCMC were especially helpful in accommodating project staff and many project meetings. We are grateful to the following individuals, who participated in the project's working groups and discussions and in testing the evaluation tool: W. Adams, M. Aminu-Kano, M. Ausden, E. Ball, S. Barnard, A. Bowley, E. Bowen-Jones, R. Brett, M. Brooke, P. Brotherton, P. Buckley, N. Bystriakova, D. Coomes, B. Cooper, B. Dickson, J. Doberski, E. van Ek, J. Ekstrom, L. Evans, D. Gibbons, M. Green, R. Green, A. Grigg, D. Hawkins, M. Harris, P. Herkenrath, A. Hipkiss, G. Hirons, R. Hossain, F. Hughes, J. Hughes, J. Hutton, C. Ituarte, D. Kingma, P. Laird, A. Lanjouw, P. Lee, C. Magin, T. Milliken, R. Mitchell, D. Noble, S. O'Connor, T. Oldfield, K. O'Regan, E. Papworth, A. Rodrigues, R. Sinegar, P. Stromberg, W. Sutherland, D. Thomas, R. Trevelyan, G. Tucker, S. Wells, J. Williams and M. Wright. We thank two anonymous referees for helpful comments on an earlier draft of this article.
Biographical sketches
The authors, who made up the steering committee for the Cambridge Conservation Forum (CCF) collaborative project on Harmonizing Measures of Conservation Success, all work in conservation research and/or practice with organizations that are members of CCF. CCF exists to strengthen links and develop new synergies across the diverse community of conservation practitioners and researchers based in and around Cambridge, UK, working at local, national and international levels, and currently comprises more than 40 member organizations.