Introduction
The elaboration and dissemination of guidelines for managers of protected areas (PAs) play key roles in conservation policies, on a par with large-scale frameworks such as the European Directives (Evans 2012) or vehicles for conservation funding such as dedicated parts of the Common Agricultural Policy in Europe (Linares et al. 2022). These various components of conservation policies are often interrelated, since institutions and non-governmental organisations (NGOs) financing conservation actions frequently condition funding upon the application of guidelines produced by the same or other institutions, whose legal status and role are entrenched in large-scale conservation frameworks.
The importance of evaluating conservation initiatives, which is increasingly acknowledged in the scientific literature (Álvarez-Fernández et al. 2020a, 2020b, Pearson et al. 2022), is relevant to these various components of conservation policies. Indeed, evaluations can improve conservation by enabling us to learn from past errors and by streamlining funding towards effective actions (Grantham et al. 2009, Bottrill et al. 2011). This logic holds true both for concrete conservation actions in the field on a local scale and for conservation policies on national, regional or even global scales (Baylis et al. 2015). The European Natura 2000 policy is exemplary in this respect, as its implementation involves iterative evaluations (Jeanmougin et al. 2017). More specifically, in the case of conservation guidelines, evaluation is needed because guidelines for managers can play major roles in the success or failure of conservation initiatives at two levels. First, ill-conceived guidelines can mislead managers into setting up and implementing wrongheaded conservation actions. Second, because institutions and NGOs financing conservation actions can condition funding upon their application, ill-conceived guidelines can channel funding towards defective conservation projects.
Evaluating guidelines, however, involves specific methodological challenges. Indeed, although evaluating conservation actions involves numerous difficult technical challenges, it can be done rather unequivocally, in a typical evidence-based approach (De Marchi et al. 2016), by quantifying whether the evaluated actions have had positive impacts on conservation targets, such as the populations of targeted threatened species (e.g., Sanderson et al. 2015). Many such quantitative assessments of conservation effectiveness have been conducted in both the grey and the academic literature in recent years (Courrau et al. 2006, Stolton et al. 2019). They hold considerable promise for improving conservation actions (Courrau et al. 2006, Bottrill & Pressey 2012, Geldmann et al. 2013, Watson et al. 2014, Stolton et al. 2019).
In the case of management guidelines, however, many confounding factors can make it technically difficult – and conceptually questionable – to assess the quality of guidelines based on the success or failure of the actions they guide. Indeed, faultless guidelines can be poorly applied, ill-intentioned actors can undermine their application, unforeseeable political or socio-economic dynamics can render them inapplicable and there may be insufficient relevant data to implement effectiveness assessments. Therefore, evaluating guidelines requires a broader framework that overcomes the limitations of effectiveness assessment by supplementing this criterion with others.
Against this background, our focal question in the present article is: what framework can be used for evaluating conservation guidelines? To answer this question, we test two hypotheses.
The first hypothesis is that, although the academic conservation literature contains numerous, piecemeal attempts at evaluating various aspects of management guidelines, a general framework whose relevance is proven in the academic scientific literature is lacking. Note that, as formulated here, this hypothesis focuses only on evaluation frameworks with proven academic scientific credentials and thereby excludes the numerous frameworks produced and used by field experts or expert institutions, whose relevance is entrenched in practice rather than in academia. This reflects a basic assumption of this article: the abovementioned promises of evaluation are predicated on proven academic scientific robustness. This assumption should not be misunderstood as disparaging frameworks produced by field experts and institutions but as a reminder that academic science has an important role to play in validating the credentials of such frameworks.
To test our first hypothesis, we use an innovative, hybrid methodology based on both an interpretative approach anchored in the social sciences and a quantitative review of the academic conservation literature.
The second hypothesis, which is inspired by Jeanmougin et al. (2017) and Choulak et al. (2019), amongst others, is that ‘policy analytics’ (Meinard et al. 2021), a framework introduced in, and currently mainly confined to, the literature in decision sciences, provides the general framework needed by encompassing all of the relevant piecemeal contributions found in the academic conservation literature. Policy analytics is a multicriteria framework that champions the use of three criteria, in addition to effectiveness, in policy evaluation (a schematic sketch of this multicriteria structure follows the list below):
- Scientific credibility – this criterion refers to the need for the evaluated objects (in our case, guidelines) to be based on scientific findings. This echoes the numerous arguments advocating the need for conservation policies to be anchored in conservation science (Dubois et al. 2020) in order to overcome knowledge or implementation gaps (Knight et al. 2008, Arlettaz et al. 2010).
- Operationality – this criterion refers to the idea that it should be possible to use the knowledge and approaches proposed in the assessed guidelines for day-to-day management (Jeanmougin et al. 2017, Choulak et al. 2019).
- Legitimacy – this criterion refers to the fact that, because managers of PAs devise and implement actions that can, in some cases, conflict with other public policies, such as urban planning or economic development (and, in some cases, even use public financial and human resources for that purpose), conservation policies and actions should be acceptable to stakeholders (Meinard 2017, Arpin & Cosson 2021).
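To make this multicriteria structure concrete, the minimal sketch below (in Python; the data structure, names and example codes are our own illustrative assumptions, not part of the policy analytics framework) shows one way an evaluation against these criteria could be recorded:

```python
from dataclasses import dataclass, field

# Effectiveness plus the three criteria discussed above; names are illustrative.
CRITERIA = ("effectiveness", "scientific credibility", "operationality", "legitimacy")

@dataclass
class GuidelineEvaluation:
    guideline: str
    # One list of identified problems per criterion (cf. the S/O/L problem
    # codes used in the 'Results' section below).
    problems: dict = field(default_factory=lambda: {c: [] for c in CRITERIA})

    def add_problem(self, criterion: str, description: str) -> None:
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        self.problems[criterion].append(description)

    def summary(self) -> dict:
        # Number of problems recorded per criterion.
        return {c: len(p) for c, p in self.problems.items()}

# Hypothetical usage, echoing problem S1 from the application below.
evaluation = GuidelineEvaluation("Guide for the elaboration of management plans")
evaluation.add_problem("scientific credibility",
                       "S1: promotes Red Lists without discussing their biases")
print(evaluation.summary())
```

Recording problems per criterion, rather than collapsing them into a single aggregate score, mirrors the multicriteria spirit of the framework: the criteria are meant to be examined separately.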
Following our demonstration that this framework is relevant to evaluating conservation guidelines, we illustrate this relevance by developing a pilot application to a particular guide for PA managers: the French ‘Guide for the elaboration of management plans for natural areas’ (http://ct88.espaces-naturels.fr). These guidelines were introduced by the French Biodiversity Agency, an institution entrusted by the central government with orchestrating all of the conservation policies devised and implemented at the national scale. The guide’s purpose is to provide a single reference that coherently supports the work of all of the actors involved in elaborating management plans for PAs in France.
Materials and methods
Literature review
To identify a framework for evaluating conservation guidelines, we performed a literature review of studies devoted to evaluating conservation documents – not only guidelines but also management plans and programmes. This literature review uses a quantitative approach to capture relevant contributions and then analyses them in an interpretative approach based on a thorough reading of the selected papers and an interpretation of their content. It thereby combines the strengths of both quantitative methods and social-science interpretative reasoning.
As explained above, our hypotheses, to be tested thanks to this literature review, were (1) that there is no commonly accepted framework available in the academic literature to perform the kind of evaluation needed, and (2) that the three ‘policy analytics’ criteria encompass all of the relevant approaches available in the literature despite their diversity, and therefore constitute a general and robust evaluation framework.
The literature review was conducted using a standard four-stage process (Barreto et al. 2020): (1) definition of the objectives guiding the review; (2) definition of the search protocol (database and search terms); (3) selection of articles based on predetermined criteria; and (4) analysis of the selected literature.
For this bibliographic research, we used the Web of Science (WoS) core collection database, one of the two main publication databases currently used by academic researchers (Pranckutė 2021). Several other databases could have been chosen, the most prominent being Google Scholar and Scopus. The former was excluded because, although it appears to have a wider coverage, it includes both academic publications and numerous other resources, such as unpublished reports or manuscripts. If the point had been to identify a diversity of contributions, including the grey literature, this would have been an asset. However, as explained in the ‘Introduction’ section, the hypotheses that we were concerned to test are only focused on the academic scientific literature, which made Google Scholar inappropriate. Scopus’s coverage is also considered broader than WoS’s according to recent analyses; however, for searches based on keywords, such as those we wanted to perform (see below), WoS is considered more efficient (Pranckutė 2021). We therefore worked with WoS; a comparative analysis using several databases might have yielded pertinent results but fell beyond our scope (see ‘Discussion’ section).
An initial search was performed on the Web of Science core collection database on 6 February 2023, for the period before 2021, using the following request: [protected areas* AND (management OR ecological restoration*) AND (assessment* OR evaluation* OR analysis) AND (guide* OR manual OR tool OR plan)] on abstracts, titles and keywords. The set of articles obtained was then manually screened in an interpretative, social-science approach to select all of those articles that contained evaluation criteria liable to constitute a usable framework for evaluating conservation guidelines. This second step in the search was designed to eliminate articles (1) devoted to evaluating stakeholders’ perceptions of specific documents rather than evaluating the documents themselves, (2) focusing on specific, limited aspects of the document at issue without proposing criteria for evaluating management documents as a whole and (3) presenting an analysis of the management document on the basis of the topics covered without proposing transposable evaluation criteria.
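To illustrate how such a search request translates into a reproducible filter, the sketch below applies an approximation of the boolean query to a hypothetical export of WoS records; the file name, column names and matching logic are our assumptions (WoS applies the query server-side), and the subsequent interpretative screening cannot be automated in this way.

```python
import csv
import re

# Each group lists alternative patterns; a record must match at least one
# pattern from every group, approximating the WoS request:
# protected areas* AND (management OR ecological restoration*) AND
# (assessment* OR evaluation* OR analysis) AND (guide* OR manual OR tool OR plan).
QUERY_GROUPS = [
    [r"protected areas?\w*"],
    [r"\bmanagement\b", r"ecological restoration\w*"],
    [r"assessment\w*", r"evaluation\w*", r"\banalysis\b"],
    [r"guide\w*", r"\bmanual\b", r"\btool\b", r"\bplan\b"],
]

def matches_query(text: str) -> bool:
    t = text.lower()
    return all(any(re.search(p, t) for p in group) for group in QUERY_GROUPS)

# 'wos_export.csv' and its column names are hypothetical.
with open("wos_export.csv", newline="", encoding="utf-8") as f:
    retained = [
        row for row in csv.DictReader(f)
        if matches_query(" ".join(row.get(col, "") for col in ("title", "abstract", "keywords")))
    ]
print(f"{len(retained)} records retained for manual interpretative screening")
```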
For all of the articles selected using this procedure, we then identified the criteria on which their evaluations were based. In many cases, these criteria were not explicitly stated as such, and the identification was therefore to some extent interpretative. We then reformulated these criteria as synthetic questions. Lastly, the formulations of these synthetic questions were screened to identify keywords associated with the various criteria constituting the policy analytics framework. Lists of keywords were not defined ex ante but rather elaborated as the interpretation of criteria proceeded. Some keywords could refer to several of the policy analytics criteria; in these cases, the larger context provided by the whole sentences articulating the criteria was used to identify interpretatively the policy analytics criterion or criteria (if any) to which the different occurrences of these keywords referred in each case. Similarly, an interpretative reading of the whole sentence was used when several keywords referring to different policy analytics criteria were present in the formulation of a single criterion from the literature at issue. This interpretative process eventually allowed us to determine whether the various criteria are encompassed in one or more of the policy analytics criteria.
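The keyword-screening step can be sketched as follows; the keyword lists and the example question are hypothetical placeholders, since the actual lists were elaborated iteratively during interpretation and ambiguous matches were resolved by reading whole sentences in context:

```python
# Illustrative keyword lists (our assumptions): the actual lists were
# elaborated as the interpretation of criteria proceeded, not fixed ex ante.
KEYWORDS = {
    "scientific credibility": ["scientific", "knowledge", "data", "monitoring", "evidence"],
    "operationality": ["resources", "budget", "staff", "planning", "implementation"],
    "legitimacy": ["stakeholder", "participation", "values", "community", "acceptance"],
}

def candidate_criteria(synthetic_question: str) -> set:
    """Returns every policy analytics criterion whose keywords occur in a
    synthetic question; zero or multiple hits are flagged for an
    interpretative reading of the whole sentence."""
    q = synthetic_question.lower()
    return {crit for crit, words in KEYWORDS.items() if any(w in q for w in words)}

# Hypothetical synthetic question; it hits two criteria, triggering the
# interpretative fallback described above.
question = "Does the plan organise staff and budget around monitoring objectives?"
hits = candidate_criteria(question)
if len(hits) == 1:
    print(f"mapped to: {hits.pop()}")
else:
    print(f"ambiguous or unmatched ({sorted(hits)}): read the full sentence in context")
```

Flagging zero or multiple matches for manual reading, rather than forcing a single label, mirrors the interpretative fallback described above.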
Application
Based on the results from the bibliographic analysis, we then applied the identified relevant evaluation framework to the latest version of the French guidelines to develop management plans for PAs (‘Guide for the elaboration of management plans for natural areas’; http://ct88.espaces-naturels.fr). These guidelines were chosen because they represent an attempt at orchestrating all of the conservation policies devised and implemented at a relatively large scale (that of the whole of France). This analysis illustrates the applicability of our framework and points to the strengths and weaknesses of this particular document. This analysis also enables us to suggest means to improve this document.
Results
Article selection
The initial search yielded 3593 articles (Table S1); the ensuing interpretative selection procedure filtered out 3204 of them that did not in fact tackle the evaluation of conservation documents. Among the articles that did evaluate conservation initiatives or documents, 367 failed to propose transferable criteria and 60 focused only on effectiveness. In the end, only 22 articles provide a possibly transferable framework based on clearly articulated evaluation criteria other than effectiveness (Table S1). This first result echoes the intrinsic difficulty of evaluating guidelines.
The 22 articles finally selected are mainly relatively recent contributions to the literature, with 63% (n = 14) published after 2013. They appeared in 10 journals, the most frequent being Environmental Management (n = 5) and Ocean and Coastal Management (n = 4). Articles with case studies are the most frequent; these concern 15 countries, the most represented being France (n = 6), Spain (n = 4), Portugal (n = 4) and England (n = 4), and 36 types of PAs, the most frequently covered being national parks (n = 8) and marine nature parks (n = 7). The conservation documents evaluated are mainly management plans (n = 13); the others are work programmes, guides for the elaboration of management plans and monitoring programmes.
Identification and analysis of evaluation criteria
The criteria used in the various selected articles, rearticulated in synthetic questions, are presented in Table 1. Column C2 lists these synthetic questions, article by article (column C1). When similar criteria are shared by different articles, these articles are grouped together in column C1 (e.g., line L6, which groups three papers using the same criteria). If a commonly accepted set of evaluation criteria had been available, it would have appeared as a single cell or group of cells in column C2, attached to a single cell in column C1 grouping an important part of the population of papers (Table 1). This is not the case, as the most populous cell in column C1 contains only three papers, four cells contain two papers and 14 cells out of 18 contain only one paper. This first analysis allows validation of our first hypothesis, as it shows that no commonly accepted set of evaluation criteria currently exists.
Abbreviations used in Table 1: Le. = legitimacy; Op. = operationality; PAME = Protected Area Management Effectiveness; RAPPAM = Rapid Assessment and Prioritization of Protected Area Management; Sc. = scientific credibility; SMART = Specific, Measurable, Achievable, Relevant and Time-bound.
The subsequent analysis, striving to identify whether the various proposed criteria can be encompassed in one or several of the policy analytics criteria, shows that the criteria used in the 22 articles can all be interpreted as special cases of the policy analytics criteria (Table 1, column C3). Among the 22 articles, 18 champion criteria that can be interpreted as variants of a general criterion of operationality. These criteria refer to requirements to take administrative, legal or financial constraints into account, to cogently organize human and material resources or to use relevant organizational tools. Nineteen of the 22 articles put forward criteria capturing aspects of legitimacy. These criteria mention the need to include various types of stakeholders, the importance of discussing and/or assessing values and the ways by which the public or different relevant communities were involved. Lastly, 12 articles promote criteria reflecting scientific credibility requirements. Such criteria mention the need to anchor management in updated knowledge and data, to implement relevant monitoring schemes or to use concepts and frameworks accepted in the scientific community.
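For transparency, tallies of this kind can be reproduced mechanically once column C3 of Table 1 is encoded; in the sketch below the encoding is hypothetical and truncated, and only the counting logic is shown.

```python
from collections import Counter

# Hypothetical, truncated encoding of Table 1, column C3: one set of
# policy analytics criteria per selected article (22 sets in the full table).
article_criteria = [
    {"Op.", "Le.", "Sc."},
    {"Op.", "Le."},
    # ... the remaining 20 articles would be encoded likewise ...
]

# Counting how many articles invoke each criterion; with the full table
# this yields Op. = 18, Le. = 19 and Sc. = 12.
tally = Counter(criterion for criteria in article_criteria for criterion in criteria)
print(tally)
```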
Application of the evaluation framework
The application of the evaluation framework constituted by the scientific credibility, operationality and legitimacy criteria to the French ‘Guide for the elaboration of management plans for natural areas’ highlights considerable weaknesses with respect to the three criteria (see also Osorio et al. 2023, which expands on some of these issues).
In terms of scientific credibility, this analysis shows that:
- The French ‘Guide for the elaboration of management plans for natural areas’ promotes the use of Red Lists and similar species lists (p. 29 of the downloadable pdf file) without mentioning the uncertainties and biases affecting them (Yang et al. 2013, Beck et al. 2014, Meyer et al. 2015, McRae et al. 2017, Jarić et al. 2019; problem S1).
- It wrongly assumes (p. 29) that experts in the field can perform analyses of ecosystem functioning (Pe’er et al. 2014, Jeanmougin et al. 2017, Troudet et al. 2017, Jarić et al. 2019, Sutherland et al. 2019; problem S2).
- It ignores that, according to the academic literature in conservation, assessing representativeness is a major global challenge (p. 29; Anthamatten & Hazen 2007), mainly because of a lack of data in inventories (Fedorov et al. 2020; problem S3).
- It ignores difficulties in choosing how to aggregate various dimensions or criteria to produce overall assessments of the value of natural sites (p. 30; e.g., Schwartz et al. 2018, Choulak et al. 2019; problem S4).
- Its recommendations on how to frame objectives are at odds with the acknowledged importance of assessing the achievement of targets (p. 35; e.g., Ferraro & Pattanayak 2006; problem S5).
- It ignores the literature highlighting the need to assess the influence of external factors (p. 35; Holling 1996, Apitz 2008, Santos & Schiavetti 2014, Bennett et al. 2017, Sendzimir et al. 2018; problem S6).
- It fails to promote the ongoing flexibility and adaptation of practices, the cooperation between experts, scientists and managers, and the mutual learning emphasized in the literature on adaptive management (a single reference to adaptive management, p. 39, without explanation or operational details; Folke et al. 2005, Bormann et al. 2007, Ananda & Proctor 2013; problem S7).
- It downplays the difficulties in choosing or constructing indicators (pp. 42–43; Bouyssou et al. 2000, Hallam et al. 2020; problem S9).
- It ignores the literature on the importance and complexity of stakeholder identification and participatory processes (p. 62; Luyet et al. 2012, Paletto et al. 2015, Kovács et al. 2017; problem S10).
In terms of operationality:
- The French ‘Guide for the elaboration of management plans for natural areas’ fails to discuss operational procedures for assessing representativeness (p. 29; Mingarro & Lobo 2018, Fedorov et al. 2020, Milla-Figueras et al. 2020; problem O1).
- It fails to explain how the analysis of ‘influencing’ or ‘stress’ factors should be carried out (p. 36; problem O2).
- It fails to explain how managers should choose indicators to structure monitoring and evaluation (p. 42; problem O3).
- It fails to explain how stakeholders should be identified and recruited (p. 62; Paletto et al. 2015; problem O4).
In terms of legitimacy:
- The French ‘Guide for the elaboration of management plans for natural areas’ fails to discuss the various actors’ responsibilities and strategies as well as actions to strengthen accountability (p. 39; problem L1).
- It fails to justify the key choices underlying the definition it gives of operational objectives (p. 35; problem L2).
- It fails to promote discussions on the values underlying the tools used, such as Red Lists and similar species lists (p. 29; problem L3).
- It promotes the search for consensus (p. 62), thereby ignoring that consensus-seeking can nullify the possibility of debating different positions without having to resort to violence, prevent an in-depth analysis of conflicts and obscure the hegemony of certain actors (Mouffe 2005, Arpin 2019; problem L4).
Recommendations
The literature suggests that the weaknesses identified by our evaluation can all be addressed by implementing relevant participatory processes involving both local communities and a diversity of knowledge-holders, including experts and scientists. Indeed, by involving scientific experts, participation can help strengthen scientific robustness (scientific credibility), and the co-construction with local actors and operational workers can help fix operational problems (operationality). In addition, the inclusion of stakeholders with diverse views and values can strengthen legitimacy by initiating constructive discussions on values (García-Montes & Monreal 2019) and, depending on the specific situation, either by enabling stakeholders to build a shared vision of the future (Santana-Medina et al. 2013) or by enabling the open acknowledgement of irreducible disagreements.
The fact that guidelines such as those analysed here are plagued by problems that participatory processes can fix shows that participation, although routinely and repeatedly referred to in guidelines, is insufficiently dealt with in such documents, which underestimate the difficulty of setting up and implementing participatory processes (Osorio et al. 2023).
Discussion
The main result of this analysis is that the relevant academic literature in conservation is sparse and heterogeneous, but a relevant encompassing framework is provided by the literature in decision sciences on the ‘policy analytics’ framework. Like most scientific studies based on literature reviews, this analysis admittedly neglects the grey literature, because the latter is excluded from large-scale homogeneous bibliographic databases such as the one used here. However, as explained above, excluding the grey literature is justified when the aim is to identify frameworks for which the robustness is buttressed in the academic scientific literature.
In addition, most of the articles analysed in Table 1 are based on important contributions to the grey literature, which they duly cite. This suggests that our analysis indirectly encompasses at least part of the relevant grey literature. That said, the grey literature certainly contains other useful frameworks that are ignored by the academic scientific literature. This conjecture suggests that academic scientific evaluations of such contributions to the grey literature are needed to entrench their scientific credentials and, incidentally, to increase their visibility. A systematic review of evaluation frameworks published in the grey literature and a systematic meta-evaluation of their scientific credentials would accordingly be major contributions. Dedicated methodologies will have to be devised for that purpose, as identifying and screening the grey literature involves numerous major challenges. All of this falls beyond the scope of the present paper.
Comparing our results with those obtained using other large-scale bibliographic databases, such as Scopus, could also bring complementary insights. However, a similar analysis of Scopus could not possibly invalidate our key message, according to which there is no dominant evaluation framework for conservation guidelines in the academic literature. Indeed, although Scopus is known to be more extensive in some domains, even if all of the records included in Scopus but not Web of Science were to share a unique framework, which seems unlikely, such a framework would not dominate the Scopus plus Web of Science corpus.
Another improvement that future studies could undertake is to test the robustness of the interpretative steps of our analyses. We characterize as ‘interpretative’ the operations that consisted in reformulating criteria as synthetic questions and in identifying keywords referring to the various policy analytics criteria. Empirical robustness tests could be implemented by asking a diverse set of experts to propose their own reformulations and keywords.
Another, possibly more promising refinement of our analysis would be to test whether the ‘policy analytics’ criteria can be rendered more precise whilst retaining their ability to encompass the criteria we identified in the scientific literature. Indeed, a plausible criticism that could be raised against our approach is that the ‘policy analytics’ criteria are exceedingly vague, and that this vagueness alone explains why they encompass all of the criteria proposed in the literature. This suspected vagueness of the framework has been discussed in the literature in the decision sciences and management (e.g., Meinard et al. 2021), with proposals of more precise definitions of especially complex concepts, such as legitimacy. This literature can be used to identify directions for testing more precise variants of the framework.
The second task performed in this study consisted in applying the three criteria of legitimacy, operationality and scientific credibility to specific guidelines for managers of PAs. This application illustrates that, although the criteria proposed in our framework are arguably more abstract than those identified in the conservation literature, this abstractness does not come at the expense of applicability. The main conclusion of the application was that the evaluated guidelines are plagued by significant weaknesses that could be overcome by implementing relevant participatory processes. Some initiatives arguably go in the direction of implementing participation that might be able to address the kind of problems that we pinpointed in this analysis. For example, the German procedure to draw up management plans for Natura 2000 sites (e.g., in Baden-Württemberg State, Germany; https://pd.lubw.de/69643) involves the wide diffusion of preliminary layouts of management plans associated with public hearings, on-site debates with stakeholders and websites presenting management actions. However, the associated guidelines do not detail how such mechanisms should be chosen and implemented. This gap echoes the multiple weaknesses in the application of participation that generally plague current PA management in Europe (Piwowarczyk & Wróbel 2016, Kovács et al. 2017, Álvarez-Fernández et al. 2020a, 2020b). The lesson learnt from our analysis of management guidelines hence appears to hold true more generally for a vast array of conservation policy tools.
However, the very idea that participation should be encouraged in conservation decision-making, which constitutes the backbone of our recommendations, is not without its critics. Indeed, participation does not always strengthen conservation (Young et al. 2013): it increases the time needed to develop management strategies and their costs (Paletto et al. 2015), and it can be used as a manipulative tool to reproduce unequal power relations and reinforce the dominance of certain forms of knowledge (Turnhout et al. 2020). To overcome such problems, Osorio et al. (2022) champion ‘counter-argumentative participation’, defined as a process by which different stakeholders influence decision-making by expressing criticisms and counter-arguments. How such recommendations can be integrated into conservation guidelines such as those analysed here remains to be formally established, as does the extent to which they can solve the problems facing conservation practitioners in the field.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/S0376892924000055.
Acknowledgements
We thank the research team at LIVE, A Bagaeva, A-C Vaissière, E Hassenforder, A Mangos, A Richard, L Germain and KM Wantzen, for their comments on earlier versions of this article, the editors and reviewers for their powerful comments and criticisms of the submitted version and I Villa for her thorough linguistic review of the main text.
Financial support
The PhD of Angela Osorio was funded by the University of Strasbourg.
Competing interests
The authors declare none.
Ethical standards
None.