
Evidence-based policy: What sort of evidence do governments need?

Published online by Cambridge University Press:  01 January 2023

Ann Nevile*
Affiliation:
Australian National University, Australia
*Ann Nevile, Crawford School of Public Policy, The Australian National University, Canberra, ACT 0200, Australia. Email: [email protected]

Abstract

Very few would dispute the proposition that evidence about the effects of different policy options should inform policy decisions. There is much less agreement, however, on the nature of the evidence needed, and evaluating that evidence can be difficult, particularly when experts offer conflicting advice. This article presents the position held by Professor Nevile that, in giving policy advice to government, it is almost always desirable to draw on a range of different policy instruments. While theoretical input is usually important, it is even more important that theory does not lose contact with the real world. Factual descriptions of the real world, and a theoretical toolkit containing more than one theory, are essential to achieving this goal. These principles are illustrated by a discussion of a particular category of policy advice – the evaluation of government programmes.

Type
Symposium Articles
Copyright
© The Author(s) 2013

Introduction

Very few academics or policymakers would dispute the proposition that evidence about the effects, or likely effects, of various policy options should inform government decision-making. However, there is much less agreement over the extent to which this normative aspiration is realised in practice. Even if governments are genuinely committed to evidence-based policy, evidence-based policymaking is often difficult to achieve for political and other reasons. Policy design may owe more to political expediency or ideology than to evidence, and there may be pressure to reach a decision quickly, either to obtain political advantage or because speed is necessary if measures are to achieve their intended purpose. The latter was the case with the Rudd Government’s fiscal stimulus package, introduced towards the end of 2008 to moderate the effects of the global financial crisis on the Australian economy.

In addition, disagreement among technical experts is not uncommon, as different experts place different weights on relative costs and benefits. These differences may spring from the value base underlying a particular theoretical perspective, from experts’ institutional roles, or from support for a particular group or groups affected by the policy options under consideration. All of these can be important, although the influence of the value base underlying a particular theoretical perspective is the most insidious because it often goes largely unrecognised, not only by those holding the values but by others as well. Finally, there can also be a lack of understanding of the relative merits of different forms of evidence, and an associated difficulty in mapping overly theoretical forms of evidence onto the ‘real world’ of policy implementation (HM Treasury, 2007). Consequently, it may not be clear to governments what evidence should be considered when formulating policy or evaluating existing programmes.

The following two sections of this reflection on evidence-based policy discuss in more detail the difficulties governments face in practising evidence-based policymaking, focusing in particular on the consequences of disagreement among technical experts and on the importance of ensuring that government is presented with different forms of evidence. The latter issue has been a central concern of my father, John Nevile, both in his academic writing and in the formal and informal consulting he undertook for the Whitlam and Hawke Governments. The final section extends the discussion of different types of evidence by considering what sort of information governments need when evaluating specific programmes.

Disagreement among experts

Disagreement among relevant technical experts usually has the effect of slowing the pace of change as governments decide on less radical alternatives or postpone making a decision (Nevile, 2002: 148). For example, the decision to float the Australian dollar was delayed by opposition to the move by the head of Treasury, who believed that the costs of deregulation outweighed the benefits because deregulation would remove an important constraint on the government adopting inflationary policies. In October 1983, a capital inflow crisis again raised the question of deregulating foreign exchange markets. Although the Treasurer and his office, the Prime Minister and his office, the Reserve Bank and the Department of Prime Minister and Cabinet all believed the exchange rate should be deregulated, the Treasurer

… knew the decision was historic; he knew the consequences were unpredictable. He didn’t want the markets, domestic and abroad, thinking that the Australian treasury had opposed the move. Both for the record and his own reassurance Keating wanted the treasury on board. (Kelly, 1992, cited in Nevile, 1997: 282)

Consequently, a compromise position was adopted where a number of changes were made to foreign exchange arrangements short of actually floating the dollar (Nevile, 1997: 283).

As noted earlier, the head of Treasury was primarily concerned with minimising inflation. Reserve Bank officials shared this desire but disagreed on how to achieve it. The Reserve Bank’s position reflected both its responsibility to maintain a well-functioning financial sector and beliefs, then current in central banks in Europe and other OECD countries, about the best way to do this. Interestingly, although it may not have been realised at the time, deregulating financial markets meant that interest rates became a much more important policy tool for reducing inflationary pressure, and hence increased the importance of actions taken by the Reserve Bank.

Apart from illustrating the effects of disagreement among technical experts, the events leading up to the deregulation of Australia’s exchange rate and the removal of almost all exchange controls also illustrate the limits of expert evidence when set against the institutional position and personal attributes of the person with final authority to determine what evidence the government needs before making a major policy change.

Although senior Reserve Bank officials believed exchange controls should go, they did not believe that removal of almost all exchange controls was politically possible. Therefore, when summoned to Canberra on 9 December 1983 to discuss another looming foreign exchange crisis, senior Reserve Bank officials went into the meeting with a proposal to remove not all exchange controls but only those necessary in order to float the dollar. The Reserve Bank’s ‘War Book’ – a document dating back to 1975, put together to take advantage of any windows of opportunity that might emerge to move towards deregulation of the exchange rate and exchange controls – reflected this expectation. However, a window of opportunity far wider than anticipated arose during the 9 December meeting when, in the middle of a discussion about what would happen to the exchange rate and currency flows depending on what decisions were made about exchange controls, the Prime Minister turned to the Reserve Bank Governor and said, ‘if you had the option of getting rid of all the exchange controls, Governor, would you do it?’ The view of the Governor, the Deputy Governor and the Chief Manager, Financial Markets, was that it would be difficult to manage, but that if the window of opportunity was there it should be taken, because such an opportunity might never arise again. Although Hawke’s senior economic advisor was generally opposed to exchange controls, he did not suggest to the Prime Minister that such a question be asked (Nevile, 1997: 284, 291). The final outcome probably would have been different had someone other than Hawke been Prime Minister.

Different types of evidence

My father, John Nevile, holds very strongly that in both academic writing on policy issues and in advising governments on policies to deal with specific problems, it is almost always desirable to draw on a range of different types of policy instruments. He believes that theoretical input is usually important, but it is even more important to ensure that theory does not lose contact with the real world.

Factual descriptions of relevant institutions, both those suggested by theory and those suggested by intelligent observation of what is actually occurring, help to prevent this loss of contact. So too does having a theoretical toolkit where more than one theory is available to be used.

For example, in 1992–1993, my father was one of two commissioners in the Industry Commission Inquiry into Public Housing. The commissioners were supported by Industry Commission staff, who provided information on a wide range of issues to assist in the writing of the report and the formulation of recommendations. One question to be decided was the nature and extent of Commonwealth rent assistance to low-income families renting in the private sector. The information drawn on in considering this issue included a number of theories, ranging from demand and supply analysis to aspects of welfare theory. Factual information included estimates of the elasticities of demand and supply for low-income rental properties in the private sector, the conditions then attached to Commonwealth assistance to private renters, and the demonstrated inequities produced by the interaction of demand and supply conditions with that assistance. Information was also provided on the effect of capital gains on the longer term cost of assisting low-income families in public housing and in the private sector, along with a social audit of housing assistance approaches.

In the light of all the information provided, the report concluded that the provision of public housing assistance was the most cost-effective method of providing housing to those on low incomes, but that in current and foreseeable circumstances, there was also a need for rental assistance for low-income earners renting in the private sector. The report also recommended major changes in eligibility conditions for rent assistance given to those in the private sector, but noted that some of the most important of those changes should not be implemented until there had been a review of the rate of withdrawal of all Commonwealth welfare measures, in order to minimise poverty traps (Industry Commission, 1993).
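The poverty-trap concern can be made concrete with a simple calculation. The sketch below is illustrative only, using hypothetical withdrawal and income tax rates rather than the actual Commonwealth parameters considered by the inquiry: when several payments are withdrawn simultaneously as private income rises, the combined effective marginal tax rate can leave a low-income household keeping very little of any additional earnings.

```python
# Illustrative only: hypothetical withdrawal and tax rates, not the actual
# Commonwealth parameters reviewed by the Industry Commission inquiry.

def effective_marginal_tax_rate(income_tax_rate, withdrawal_rates):
    """Combined EMTR when benefit withdrawal is stacked on top of income tax."""
    return income_tax_rate + sum(withdrawal_rates)

def net_gain(extra_earnings, income_tax_rate, withdrawal_rates):
    """Disposable income kept from additional earnings, given the combined EMTR."""
    emtr = effective_marginal_tax_rate(income_tax_rate, withdrawal_rates)
    return extra_earnings * (1 - emtr)

# A household earning an extra $100 per week while facing (hypothetically):
# 20% income tax, 40c/$ withdrawal of an income-support payment,
# and 25c/$ withdrawal of rent assistance.
emtr = effective_marginal_tax_rate(0.20, [0.40, 0.25])
print(f"Effective marginal tax rate: {emtr:.0%}")                      # 85%
print(f"Kept from an extra $100: ${net_gain(100, 0.20, [0.40, 0.25]):.2f}")  # $15.00
```

Reviewing withdrawal rates jointly, as the report recommended, is a way of keeping this combined rate below the point at which additional earnings cease to be worthwhile.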

Evaluating government programmes

While technical experts often disagree over the interpretation of various forms of evidence, governments are attracted to the idea of evidence-based policy because of the additional authority provided by evidence that is regarded as ‘objective’ by the majority of relevant technical experts, particularly when funding evaluations of high-profile or expensive government initiatives. Politicians are supported in the belief that evidence generated by technical experts is objective by those experts themselves, many of whom come from social science disciplines, such as economics, which seek to emulate the natural sciences by isolating causal factors through experimental research methods. Alternative approaches exist, for example, the interpretive approach to public administration developed by Mark Bevir and Rod Rhodes, which relies less on formal models than on contextual and historical explanations (Bevir, 2011). However, these approaches remain on the periphery of most social science disciplines, dismissed because theories or causal claims become objective through comparison with rival accounts rather than through specification of appropriate variables and isolation of causal factors (Bevir, 2011: 191, 193). For example, when the UK Government decided to evaluate Sure Start Local Programmes (SSLPs) (see footnote 1), there was considerable debate over the evaluation design. Academics argued that the £12 million evaluation should be rigorous, that is, that it should conform as closely as possible to the gold standard of experimental research, a randomised control trial (Eisenstadt, 2011: 53).

Comparing a group of individuals who received a particular intervention with a similar group who did not may be feasible when evaluating a small place-based pilot programme, but even in these cases, control group comparisons depend upon a relatively standardised set of interventions delivered to all participants. This becomes difficult when the programme to be evaluated incorporates community engagement in the design of programme activities, as was the case with SSLPs, which meant Sure Start was not one intervention but several hundred different interventions (Eisenstadt, 2011: 54–55). Furthermore, participants in one particular programme may well be participating in other programmes. For example, Sure Start was just one of a number of place-based initiatives aimed at poor neighbourhoods; consequently, it was difficult to disentangle the impact of Sure Start from that of the New Deal for Communities, Education Action Zones and Health Action Zones (Eisenstadt, 2011: 56).

Where conditions can be found that approximate a control group and a group undergoing a relatively standardised intervention, the experimental approach does provide an answer to the question of whether a particular programme achieved its stated, formal objectives. In the Australian context, such conditions are rare. Policies may be designed with multiple objectives because policymaking is an inherently contested process. Participants in the process have different interests, which may or may not coincide. Policy is the outcome of negotiations among these differing interests, some of which have more power than others. The fact that policies may have multiple, even conflicting, objectives needs to be recognised when evaluating performance against objectives.
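Where the conditions for an experimental comparison do hold, the calculation it supports is straightforward. The sketch below uses invented numbers rather than data from any actual evaluation; it simply shows the kind of answer the experimental approach yields, namely a difference in outcome rates between treatment and control groups, with a confidence interval indicating how precisely that difference is estimated.

```python
import math

# Invented numbers: 400 programme participants (treatment) and 400 comparable
# non-participants (control), with the count of each group in employment
# some months after the programme.
def net_impact(n_treat, employed_treat, n_ctrl, employed_ctrl, z=1.96):
    """Difference in employment rates with a normal-approximation 95% CI."""
    p_t = employed_treat / n_treat
    p_c = employed_ctrl / n_ctrl
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
    return diff, (diff - z * se, diff + z * se)

diff, (ci_low, ci_high) = net_impact(n_treat=400, employed_treat=180,
                                     n_ctrl=400, employed_ctrl=140)
print(f"Estimated net impact: {diff:+.1%} (95% CI {ci_low:+.1%} to {ci_high:+.1%})")
```

What such a calculation cannot reveal, as the following paragraphs argue, is why outcomes differ across sites, participant groups or implementation arrangements.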

The traditional view of implementation assumes a linear relationship between policy and implementation. Policy is static, and once implemented, policy objectives will be achieved as long as certain enabling conditions (policy based on sound causal assumptions or theory, clear objectives, adequate resources, good co-ordination, ownership of the policy by implementing agencies and/or project beneficiaries) are present (Hogwood and Gunn, 1993). Reflecting on his experience implementing development programmes for the UK Department for International Development, David Mosse (2004) challenges this traditional view of the policy/implementation relationship by arguing that the operational control that bureaucracies have over events and practices ‘on the ground’ is always constrained and often quite limited. However, what bureaucracies can control is how a policy problem is defined or framed.

Framing is important because the way in which a particular policy problem is defined suggests certain solutions while disallowing or minimising others (Bacchi, 1999). For example, if unemployment is defined as a problem of job seekers lacking relevant work skills or the motivation to find work, the appropriate policy response is to assist individual job seekers through education or training programmes, or programmes designed to increase work motivation. However, if unemployment is defined as a problem caused by lack of aggregate demand in the economy, very different policy responses are called for.

The more interests or groups publicly support or promote a particular framing, the more dominant that causal explanation becomes. Different groups may have their own reasons for supporting a particular way of thinking about a policy issue or problem; consequently, keeping the coalition of differing interests together requires work. Because of this requirement to keep the coalition of interests intact, Mosse (2004: 646–647) argues that the primary function of policy is to mobilise and maintain political support, that is, to legitimise practice, not to provide a roadmap for implementation. Implementation is driven not by the formal goals of policy design but by organisational goals that revolve around the preservation of rules and administrative order (Mosse, 2004: 648–652).

Consequently, governments need to know what happened, but they also need to know why. As a senior Commonwealth public servant noted,

[a] convincing evidence base will not redeem policy that is poorly integrated with the contexts or the communities they are developed to serve. To replicate successful reforms, we must engage with successful practitioners to isolate the specifics that underpin success. (Griew, 2009: 250)

An experimental approach that assumes homogeneity except for a standardised intervention will generate little, if any, information about the process of implementation. For this reason, I would argue that a pluralist approach to evaluation is necessary to provide the diversity of evidence that governments need when considering how existing programmes should be modified to increase their effectiveness.

For pluralist evaluators, there is no single, universal logic of evaluation that can be applied to all projects or programmes. Rather, pluralist evaluators believe that combining the strengths of the experimental method with the strengths of other methods allows researchers to avoid the weaknesses of the experimental method while utilising its power (Reinharz, 1992: 180). In other words, a pluralist approach

  • Provides a complicated but realistic answer to the question of whether policy objectives were achieved;

  • Can explain failure because the evaluation looks at process as well as outcomes;

  • Can identify the costs of success, that is, the unanticipated consequences of policies;

  • Facilitates the implementation of research results because it is less likely that stakeholders will argue that their interests have not been taken into account.

Perhaps because John Nevile always regarded economics as more of an art than a science, he had a strong belief in the benefits of a pluralist approach and was therefore happy to work with non-economists in a truly collaborative manner, as exemplified in a joint research project that evaluated the effectiveness of the Work for the Dole programme (see footnote 2).

Work for the Dole began in 1997 as a voluntary pilot programme providing work experience for 10,000 young people in 174 community projects. Five years later, when the research was carried out, the programme had expanded to 55,000 budgeted places in 2002/2003 and had become the default mutual obligation option for unemployed young people and adults between the ages of 18 and 40 years. Adults aged between 40 and 50 years who did not undertake the lesser default option of community work were also required to participate in Work for the Dole. Overall, the main thrust of the programme was towards young people. In 2002/2003, more than half of the Work for the Dole participants were aged below 25 years. The formal objectives of the programme were to

  • Develop work habits in young people;

  • Involve the local community in quality projects that provide work for young people and help unemployed young people at the end of projects;

  • Provide communities with quality projects that are of value to the community (Department of Employment, Workplace Relations and Small Business (DEWRSB), 1999, cited in Nevile and Nevile, 2003).

What the government meant by ‘quality projects’ or ‘help’ was not defined, and the consequent lack of definitional precision increased the likelihood of tension between these formal objectives. For example, it was not clear whether providing communities with projects valued by the community was more important than work experience projects that helped unemployed young people at the end of the project.

In addition to the potentially conflicting formal objectives, there were two unstated, but equally important, political objectives. The first was to send a message to the wider electorate that the government believed those receiving unemployment benefits, particularly young people, should do something to help their community, a belief the government knew was shared by a significant majority of voters (Nevile and Nevile, 2003: 18). The second was to motivate unemployed young people to increase their job search activity, enrol in formal training or education courses, or declare previously undeclared earnings, by threatening them with the prospect of participation in a programme seen as an unattractive option. Although the Federal Government provided accurate information on the Centrelink website listing the full range of activities available to Work for the Dole participants, the government valued this compliance effect, and even after the programme had been operating for 5 years, many potential participants and members of the public still believed Work for the Dole offered nothing more than repetitive manual labour or unpleasant work such as cleaning public toilets or picking up used syringes (Nevile and Nevile, 2003: 23–25). Finally, for participants and most of the community organisations that developed and supervised Work for the Dole projects, the aim of the programme was not merely to ‘develop work habits’ but to find employment (Nevile and Nevile, 2003: 18).

Utilising a range of research methods and types of data was essential if the research was to generate evidence both on the effectiveness of the programme in helping participants find jobs and on the reasons why Work for the Dole was, by international standards, relatively successful at helping youth and young adults find employment, despite the fact that employment outcomes were not among the programme’s formal, stated objectives (see footnote 3). This finding was of interest to the government, but even if the research had found that Work for the Dole did not help participants find jobs, the government would not have ceased funding the programme. Qualitative data based on the experiences of participants and of those running Work for the Dole projects enabled the researchers to identify reasons for the programme’s relative success with some participants and its lack of success with older age groups, as well as implementation effects such as the effect of multiple objectives, the lack of flexibility in Departmental guidelines, inappropriate monitoring processes and the underfunding of the programme, particularly in relation to training.

The combination of technical statistical techniques for measuring net impact with a thick description of processes, neither of which was privileged over the other, provided a balanced and accurate assessment of the programme that resonated with those who worked in the sector. It ensured that the research ultimately achieved its objective of informing discussion about how the effectiveness of the programme could be increased: gradually, over a couple of years and with little fanfare, the government adopted about half of the recommendations that arose from the research findings.
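Footnote 3 reports an employment net impact of around 10%. The sketch below is not a reconstruction of the method used in Nevile and Nevile (2003); with invented data, it simply illustrates one common non-experimental way of approximating a comparison group when random assignment is unavailable: matching each participant to the most similar non-participant on observed characteristics and comparing employment outcomes across the matched pairs.

```python
# Invented data and an illustrative method only; not the estimation strategy
# actually used in Nevile and Nevile (2003).

def nearest_neighbour_net_impact(participants, non_participants):
    """
    Each record: (age, months_unemployed, employed_later).
    Match every participant to the closest non-participant on age and
    unemployment duration, then compare employment rates across the pairs.
    """
    def distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    matched_outcomes = []
    for p in participants:
        match = min(non_participants, key=lambda c: distance(p, c))
        matched_outcomes.append((p[2], match[2]))

    treated_rate = sum(t for t, _ in matched_outcomes) / len(matched_outcomes)
    control_rate = sum(c for _, c in matched_outcomes) / len(matched_outcomes)
    return treated_rate - control_rate

participants = [(22, 8, 1), (24, 14, 0), (31, 6, 1), (27, 20, 0), (19, 10, 1)]
non_participants = [(23, 9, 1), (25, 15, 0), (30, 7, 0), (28, 19, 0), (20, 11, 1)]
print(f"Matched net impact: {nearest_neighbour_net_impact(participants, non_participants):+.0%}")
```

Whatever estimation strategy is used, the resulting figure answers only the ‘what happened’ question; the qualitative evidence described above is what explains why.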

Funding

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Declaration of conflicting interests

The author declares that there is no conflict of interest.

Footnotes

1. Sure Start was an area-based initiative, which aimed to work with parents to promote the physical, intellectual and social development of pre-school children. It was allocated £450 million in its first 3 years (April 1999–2001) to support the establishment of 250 local programmes. Two years after the programme commenced, funding was significantly increased to support the establishment of a further 250 local programmes.

2. This research was funded by an Australian Research Council Linkage Grant LP0212040.

3. The research concluded that Work for the Dole had an employment net impact of around 10% (Nevile and Nevile, 2003: 49).

References

Bacchi, CL (1999) Women, Policy and Politics: The Construction of Policy Problems. London: SAGE.
Bevir, M (2011) Public administration as storytelling. Public Administration 89(1): 183–195.
Eisenstadt, N (2011) Providing a Sure Start: How Government Discovered Early Childhood. Bristol: The Policy Press.
Griew, R (2009) Drawing on powerful practitioner-based knowledge to drive policy development, implementation and evaluation. In: Strengthening evidence-based policy in the Australian Federation: roundtable proceedings, Canberra, 17–18 August, vol. 1, pp. 249–258. Productivity Commission.
HM Treasury (2007) Analysis for policy: evidence-based policy in practice. Available at: http://www.civilservice.gov.uk/wp-content/uploads/2011/09/Analysis-for-Policy-report_tcm6-4148.pdf (accessed 10 January 2013).
Hogwood, B, Gunn, L (1993) Why ‘perfect implementation’ is unattainable. In: Hill, M (ed.) The Policy Process: A Reader. New York: Harvester/Wheatsheaf, pp. 217–225.
Industry Commission (1993) Public housing. Report no. 34, 11 November. Canberra: Australian Government Publishing Service.
Mosse, D (2004) Is good policy unimplementable? Reflections on the ethnography of aid policy and practice. Development and Change 35(4): 639–671.
Nevile, A (1997) Financial deregulation in Australia in the 1980s. The Economic and Labour Relations Review 8(2): 273–292.
Nevile, A (2002) Conclusion. In: Nevile, A (ed.) Policy Choices in a Globalized World. Huntington, NY: Nova Science.
Nevile, A, Nevile, J (2003) Work for the Dole: Obligation or Opportunity? Kensington, NSW, Australia: Centre for Applied Economic Research, University of New South Wales.
Reinharz, S (1992) Feminist Methods in Social Research. Oxford and New York: Oxford University Press.