
Benefit-Cost Analysis of Prevention and Intervention Programs for Youth and Young Adults: Introduction to the Special Issue

Published online by Cambridge University Press:  29 December 2015

Margaret R. Kuklinski*
Affiliation:
Social Development Research Group, School of Social Work, University of Washington, Seattle, WA 98115, e-mail: [email protected]

Abstract

Benefit-cost analysis (BCA) and related economic evaluation methods (cost analysis and cost-effectiveness analysis) have increasingly been applied to prevention and intervention programs for youth and young adults to assess their costs as well as the gains that may be anticipated from investing in these programs. This work reflects in part the growing prominence of evidence-based programs, policies, and practices as well as evidence-informed decision making. The papers included in this special issue represent a range of topics and issues, including the need for accurate and comprehensive assessment of program costs, high-quality BCAs of prevention and intervention programs, increasing recognition of the importance of monetizing noncognitive outcomes, and the role of BCA in pay for success financing arrangements. This introduction (a) describes the evidence-based context in which this work plays a role, (b) summarizes the practical and theoretical contributions of the papers, and (c) identifies the common themes.

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© Society for Benefit-Cost Analysis 2015

1 Introduction

The decision to devote an entire issue of the Journal of Benefit-Cost Analysis to prevention and intervention programs for youth and young adults reflects the intersection of the economic evaluation and social program fields over the past two decades. Benefit-cost analysis (BCA) and related economic evaluation methods (cost analysis and cost-effectiveness analysis) have increasingly been applied to prevention and intervention programs to assess their costs as well as the gains that may be anticipated from investing in these programs. From relatively early assessments of the benefits of high-quality preschool in relation to costs (Barnett, 1996; Masse & Barnett, 2002; Karoly, Kilburn & Cannon, 2006), BCAs published in peer-reviewed journals now extend to a wide range of social programs (see, for example: BCAs of community prevention systems – Kuklinski, Fagan, Hawkins, Briney & Catalano, 2015; a divorcing families intervention – Herman et al., 2015; substance abuse prevention – Miller & Hendrie, 2009; Plotnick, Young, Catalano & Haggerty, 1998; prekindergarten – Schweinhart et al., 2005; Spoth, Guyll & Day, 2002; a therapeutic foster care intervention – Zerbe et al., 2009; and youth mentoring programs – Villar & Strong, 2007). 
The papers included in this volume represent a range of topics and issues relevant to BCAs of prevention and intervention programs for youth and young adults: accurate and comprehensive assessment of program costs (Long, Brown, Jones, Aber & Yates, 2015), high-quality BCAs of intervention programs (Bowden & Belfield, 2015; Cohen & Piquero, 2015), increasing recognition of the importance of monetizing noncognitive outcomes (Jones, Karoly, Crowley & Greenberg, 2015; Belfield et al., 2015), and the role of BCA in pay for success (PFS) financing arrangements (Temple & Reynolds, 2015). The broader reach of BCA in the social program arena and the specific questions addressed in the papers in this issue are consistent with the increasingly prominent role of evidence-informed practice and decision making.

2 Benefit-cost analysis of social programs within the larger context of evidence and accountability

The application of BCA to prevention and intervention programs is a natural counterpart to heightened interest in evidence-based practice. First applied within the field of medicine in the 1990s (Sackett et al., 1996), the practice of intervening according to the best available evidence relevant to the particular situation is now common in prevention, social work, psychology, and education. Emphasis on evidence-based practice in prevention and intervention has led to considerable investments in research demonstrating the impact of a wide range of social programs and policies, and in some areas the development of standards for establishing efficacy and/or effectiveness (Gottfredson et al., 2015). A number of clearinghouses have been created to summarize the results of these efforts and help guide work toward programs, practices, and policies with documented evidence of impact, including Blueprints for Healthy Youth Development (www.blueprintsprograms.com), the Coalition for Evidence-Based Policy (coalition4evidence.org), the Office of Juvenile Justice and Delinquency Prevention Model Programs Guide (www.ojjdp.gov/mpg), the What Works Clearinghouse (ies.ed.gov/ncee/wwc), and the National Registry of Evidence-based Programs and Practices (www.nrepp.samhsa.gov). Systematic reviews, such as those published by the Campbell Collaboration (www.campbellcollaboration.org) and the Cochrane Collaboration (www.cochrane.org), serve a similar purpose of helping to align practice with the evidence. The growing mandate for evidence-informed practice is reflected in recent executive and legislative branch actions at the federal and state levels. 
These include the Obama administration’s tiered evidence initiatives, which build upon and expand requirements for evidence of impact as a precondition for grant making (e.g., the Maternal, Infant, and Early Childhood Home Visiting Program and the Teen Pregnancy Prevention Program; Haskins & Margolis, 2014). Several states have also passed legislation tying funding to the use of evidence-based approaches (Tanenbaum, 2005; Trupin & Kerns, 2015; Washington State Institute for Public Policy, 2013). As another example, the United States Preventive Services Task Force makes recommendations for clinical services based on whether available evidence of net benefit to patients is strong (Siu, Bibbins-Domingo & Grossman, 2015).

These executive and legislative actions are possible because of the growth in the evidence base, yet they also signal the trend toward greater governmental accountability for performance and investments, particularly those that use taxpayer dollars. Benefit-cost analyses of social programs have a natural role in establishing fiscal accountability, expanding the discussion about social programs from “What works?” to include “What works, and are the benefits worth the cost?” A leader in this area, the Washington State Institute for Public Policy (WSIPP) has been using economic evidence to inform policy questions for over 20 years (Lee & Aos, 2011). One of WSIPP’s central objectives is to help the state legislature understand whether state investments in a program or set of programs are favorable. Over time, WSIPP has developed a BCA model and software tool to aid in its analyses; the model has been applied consistently to a number of policy areas, including prevention and intervention programs for children, youth, and families. Through the Results First Initiative, funded by the Pew Charitable Trusts and the MacArthur Foundation (Results First Clearinghouse Database, 2015; White & VanLandingham, 2015), consultants are also expanding the role of benefit-cost analysis in state investment and policy decisions; they are working with 21 states to translate the WSIPP model and software tool to their specific state contexts. At the federal level, the use of BCA in legislation with significant regulatory impact became more common in the 1980s and has been formalized over time (Hahn & Dudley, 2007). Detailed methodology for identifying costs and benefits exists as part of Circular A-4 guiding the use of BCA in regulatory analysis (Office of Management and Budget, 1992; Fraas & Lutter, 2011). 
Though cost analysis requirements are not yet common among grant funders, the Institute of Education Sciences now requires that cost analysis be conducted as part of awards focused on efficacy and replication (http://ies.ed.gov/funding/pdf/2015_84305A.pdf).
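The question “What works, and are the benefits worth the cost?” is typically answered by discounting a program’s benefit and cost streams to present value and comparing them. A minimal sketch of that arithmetic follows; all dollar figures, the benefit timing, and the discount rate are hypothetical illustrations, not drawn from any study cited here.

```python
# Illustrative sketch: discount a stream of cash flows to present value,
# then compute net present value (NPV) and the benefit-cost ratio.
# All amounts and the discount rate are hypothetical.

def present_value(stream, rate):
    """Discount a stream of (year, amount) cash flows to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in stream)

# Hypothetical per-participant figures: an up-front cost in year 0 and
# benefits (e.g., earnings gains, avoided justice-system costs) accruing later.
costs = [(0, 4000.0)]
benefits = [(1, 500.0), (5, 2000.0), (10, 6000.0)]

rate = 0.03  # a 3% annual discount rate, a common choice in social program BCA
pv_benefits = present_value(benefits, rate)
pv_costs = present_value(costs, rate)

npv = pv_benefits - pv_costs       # net present value
bcr = pv_benefits / pv_costs       # benefit-cost ratio
```

A positive NPV (equivalently, a benefit-cost ratio above 1) indicates that, under the stated assumptions, discounted benefits exceed discounted costs.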

3 Range of issues addressed in the special issue

The papers included in this volume illustrate a range of economic concerns related to evidence-based practice and evidence-informed decision making about prevention and intervention programs. The implications of these papers are practical as well as theoretical, contributing to the economic evidence base and providing information that can stimulate investments in social programs.

3.1 Providing comprehensive assessments of the cost of social programs

Accurate and comprehensive estimates of social program costs are foundational to high-quality BCAs, and they have practical importance in their own right. Information about resource requirements and related costs, as well as the timing of investments, can help ensure that effects documented in demonstration trials are realized in subsequent real-world implementation. It can also help program implementers choose programs that fit within their budget constraints and match their capacity for implementation. In spite of their utility, as Long and coauthors point out (Long et al., 2015) in “Cost Analysis of a School-Based Social and Emotional Learning and Literacy Intervention,” widely accepted standards for conducting cost analyses of social programs do not exist (Vining & Weimer, 2010). Reliance on budgetary data in some studies and actual resource use and related costs in others, different assumptions about the opportunity cost of volunteer time and other donated resources, and whether training and technical assistance are included in the analysis are some of the sources of variability across studies.

Comprehensive understanding of the full economic costs of implementing a program generally involves going beyond budgetary data to actual expenditures as well as the opportunity cost of donated resources common to social programs, such as volunteer time and program space (Karoly, 2012). Earlier prevention and intervention cost analyses have helped illustrate a variety of issues that need to be considered in thorough, high-quality cost analyses. These include assessing how intervention activities and associated costs vary over the life of a program (Crowley, Jones, Greenberg, Feinberg & Spoth, 2012), the costs of training and technical assistance that support program efficacy (Kuklinski, Briney, Hawkins & Catalano, 2012), and site-level variability in costs in multisite trials (Corso et al., 2013).

The paper by Long and colleagues (2015) presents a cost analysis of the 4Rs (Reading, Writing, Respect, and Resolution), a universal, school-based social and emotional learning (SEL) program delivered in nine elementary schools. The study, which is part of a larger impact analysis, shows how two major approaches to estimating social program costs, the cost–procedures–processes–outcomes analysis (Yates, Delany & Dillard, 2009) and the ingredients method (Levin & McEwan, 2001), can be jointly applied to develop accurate and informative program cost estimates. The authors provide methodological detail and a variety of cost information, including total and per-student costs and variability in costs across schools and over time. With the rise in SEL interventions, Long et al.’s (2015) analysis is likely to have utility in guiding the estimation of the costs of these interventions and helping SEL program implementers choose among alternative programs.
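The general logic of the ingredients method can be sketched as follows: enumerate each resource a program uses, price it (valuing donated resources at their opportunity cost), sum to a total cost, and divide by students served. Every ingredient name, quantity, price, and enrollment figure below is hypothetical, not the authors’ actual data.

```python
# Hedged sketch of the ingredients method with hypothetical values.
ingredients = [
    # (ingredient, quantity, unit price in dollars)
    ("teacher time (hours)",          1200, 40.0),
    ("training and coaching (days)",    30, 600.0),
    ("curriculum materials (sets)",     25, 150.0),
    ("volunteer time (hours)",         200, 20.0),   # valued at opportunity cost
    ("facility space (room-months)",     9, 300.0),  # donated space, also priced
]

total_cost = sum(qty * price for _, qty, price in ingredients)
students_served = 450  # hypothetical enrollment across participating schools
cost_per_student = total_cost / students_served
```

Pricing donated resources such as volunteer time and space is what moves the estimate from a budgetary figure toward the full economic cost discussed above.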

3.2 Applying benefit-cost analysis from a societal perspective to prevention and intervention programs

The papers by Bowden and Belfield (2015) and Cohen and Piquero (2015) represent classic applications of BCA to social programs. Each frames BCA within a societal perspective, assessing whether investment in programs serving at-risk youth or young adults pays off in the form of economic benefits over time that exceed costs. In “Evaluating the Talent Search TRIO Program: A Benefit-Cost Analysis and Cost-Effectiveness Analysis,” Bowden and Belfield (2015) draw attention to Talent Search, one of three related TRIO programs (Talent Search, Upward Bound, and Student Support Services) established in the 1960s as part of the War on Poverty. As Bowden and Belfield (2015) report, the programs received $768 million in federal funding in 2013–14 to support over 760,000 low-income students with the goal of increasing college attendance. Although they have a long history, these programs have not been subjected to economic evaluation. Bowden and Belfield (2015) amend the situation with their analysis of Talent Search, providing timely information that bears on debates over funding for TRIO related to reauthorization of the Higher Education Act (Perna, 2015).

In contrast to BCAs that draw on findings from efficacy trials (Herman et al., 2015; Kuklinski et al., 2015), the rich analysis of Talent Search is one of few to apply BCA to an intervention conducted at a much greater scale. The paper is instructive in addressing several issues that attend using a sample of intervention sites from two different states to derive overall economic estimates, including differences in costs, intervention emphasis, and impact. Bowden and Belfield (2015) illustrate their application of the ingredients method to assess costs and show how existing models of lifetime economic gains from increasing educational attainment (high school and college) can be harnessed for their analysis. Breakeven analysis as well as sensitivity of BCA results to differences in labor productivity growth, ability, and the discount rate are additional strengths of this paper, as is the authors’ exploration of variability in cost-effectiveness across sites.
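One-way sensitivity analysis of the kind described here can be sketched by recomputing NPV under alternative discount rates. Benefits that accrue far in the future, such as lifetime earnings gains from educational attainment, are especially sensitive to this choice. The cash flows and rates below are purely hypothetical, not figures from the Talent Search analysis.

```python
# Hedged sketch: how the sign of NPV can depend on the discount rate.
# All cash flows are hypothetical.

def npv(costs, benefits, rate):
    pv = lambda stream: sum(a / (1 + rate) ** t for t, a in stream)
    return pv(benefits) - pv(costs)

costs = [(0, 10000.0)]
# Benefits deferred to years 5-29, as with gains that begin after graduation.
benefits = [(t, 1200.0) for t in range(5, 30)]

for rate in (0.00, 0.03, 0.05, 0.08):
    print(f"rate={rate:.0%}  NPV={npv(costs, benefits, rate):,.0f}")
```

With these numbers the program looks favorable at low rates but fails the benefit-cost test at 8%, which is why reporting results across a range of rates (and the breakeven rate itself) strengthens an analysis.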

In “Benefits and Costs of a Targeted Intervention Program for Youthful Offenders: The YouthBuild USA Offender Project,” Cohen and Piquero (2015) shift focus to an intervention for youthful offenders. Targeting recent juvenile or young adult offenders, YouthBuild USA Offender Project (YBOP) participants were diverted from incarceration or had been referred after serving time in jail or prison. Participants were integrated into the larger YouthBuild program, a community-based job training and education intervention for at-risk, low-income young adults. Participants gain construction skills that are applied to low-income housing projects, focus on obtaining high school diplomas or General Educational Development (GED) certificates, receive counseling support, and become part of a larger YouthBuild community. Over its 20-year history, YouthBuild has served over 130,000 students in the United States and has expanded to 15 different countries (youthbuild.org/our-impact).

Like Bowden and Belfield (2015), Cohen and Piquero (2015) apply economic evaluation to a social program that has been offered at a larger scale than many other prevention and intervention programs. Using a sample of 388 participants, they conduct a careful assessment of the societal-level costs and benefits of YBOP. The latter follow from reduced recidivism and increased educational attainment among participants in relation to relevant comparison cohorts. Cohen and Piquero’s analysis is strong on several fronts, including its comprehensive look at the costs of the program as well as the long-term value of reducing crime for offenders with at least one prior conviction. Moreover, it illustrates how two comparison cohorts, the National Longitudinal Survey of Youth and the Philadelphia Birth Cohort, were effectively enlisted in the analysis because YBOP had not been evaluated in a randomized controlled trial. Cohen and Piquero’s (2015) reasoning about the relevance of these cohorts, and their incorporation of breakeven analysis, offset concerns about possible selection bias and support the conclusions reached.

These two examples of social program BCA demonstrate the substantial economic gains that can be anticipated from programs that effectively increase educational attainment and/or reduce crime. They benefit from several prior studies indicating lifetime economic gains from increasing high school and college graduation and estimating savings over a lifetime from reducing offending (e.g., Cohen & Piquero, 2009; Heckman, Humphries, Veramendi & Urzúa, 2014; Trostel, 2010). Moreover, these papers highlight the use of a variety of statistical methods to conduct economic analyses when evidence from randomized controlled trials is not available. Finally, because they focus on programs that have been implemented at a larger scale, the results of these analyses are relevant to decisions to invest broadly in these programs.

3.3 Expanding the frontiers of benefit-cost analysis of prevention and intervention programs: Monetizing SEL outcomes

Three of the six papers in this volume address SEL or noncognitive outcomes in some manner (Belfield et al., 2015; Jones et al., 2015; Long et al., 2015), a testament to the growing recognition that these outcomes are important predictors of positive development. Whereas Long and colleagues focus on the cost of an SEL intervention, Jones et al. (2015) and Belfield et al. (2015) directly tackle the need to expand benefits models to incorporate noncognitive outcomes.

In “Considering Valuation of Non-Cognitive Skills in Benefit-Cost Analysis of Programs for Children,” Jones and colleagues (2015) argue convincingly that present limitations in valuing the noncognitive skills that are the focus of many prevention and intervention programs for children and youth impede thorough economic evaluation of these programs. Although these outcomes can be subjected to cost-effectiveness analysis, programs may be at a disadvantage in policy decisions that seek support from monetized outcomes. Moreover, when noncognitive outcomes are among several areas of impact, BCA becomes a desirable method because of its capacity for summing across monetized outcomes to estimate cumulative intervention impact.

Jones et al.’s (2015) study provides an in-depth review of the state of valuation of noncognitive skills. The paper begins with a summary of frameworks for characterizing noncognitive skills, considers measurement-related issues, reviews BCAs that have incorporated shadow prices of noncognitive skills, and describes research needed to facilitate valuation. This paper provides a conceptual roadmap that will be useful to others doing economic evaluation in this rapidly advancing area of intervention development and research.

Belfield et al. (2015) echo the call for benefit-cost analysis to be expanded to incorporate SEL outcomes in “The Economic Value of Social and Emotional Learning.” After proposing an approach for establishing the benefits of social and emotional (SE) skills, they demonstrate how the approach can be applied to BCAs of four SEL interventions. As Belfield and coauthors point out, their goal is not to rank-order the interventions in terms of their efficiency gains; the latter would not make sense, as the interventions have different emphases and impacts and therefore different benefits streams (as well as different cost structures). Rather, their intriguing paper demonstrates that, although further research to develop and validate shadow prices for SE skills is needed, their incorporation into BCAs leads to more comprehensive estimates of benefits. Under certain assumptions, these benefits are substantial. Taken together, Belfield et al.’s and Jones et al.’s papers identify many important questions that need to be addressed to move research in this area forward, and they also provide direction on how such advances can be achieved.

3.4 Using benefit-cost analysis to inform PFS financing

The last paper in the series, “Using Cost-Benefit Analysis to Scale up Early Childhood Programs Through Pay for Success Financing,” by Temple and Reynolds (2015), describes the role of BCA in social impact investing, an innovative mechanism for raising capital to expand the reach of effective prevention programs. Even when BCAs demonstrate that social programs are favorable investments that generate net benefits to society, fiscally constrained governments are limited in the programs they can support. Social impact bonds (SIBs), also known as “pay for success” (PFS) contracts, offer an alternative source of funds. Under this arrangement, private investors provide the capital to scale prevention programs, and government cost savings from successful program outcomes are used to repay investors.

The first arrangement of this kind was made between the United Kingdom and a nonprofit organization known as Social Finance, which raised $5 million in private capital to provide start-up funding to prevent juvenile reoffending (Crowley, 2014). Investors were to be repaid from avoided governmental expenditures if recidivism declined by 7.5%, including a bonus payment if recidivism dropped by 8% or more. Since that time, a number of PFS contracts have been established in the United States to expand prevention programs at a time when, as Temple and Reynolds (2015) report, political and economic pressures inhibit the use of tax revenues for the initial investment. They include contracts to prevent asthma (Clay, 2013), reduce preterm births, prevent diabetes (Galloway, 2014), reduce recidivism (Butler, Bloom & Rudd, 2013), serve homeless mothers with children in the foster care system (Greenblatt & Donovan, 2013), and prevent special education placement (Temple & Reynolds, 2015; Warner, 2013).
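The repayment terms described above amount to a threshold rule, which can be sketched as follows. The principal and bonus rate are hypothetical placeholders, and real PFS contracts typically use more detailed, often proportional, payment schedules negotiated case by case.

```python
# Hedged sketch of a pay-for-success repayment rule modeled loosely on the
# outcome thresholds described in the text (repayment at a 7.5% reduction
# in recidivism, bonus at 8% or more). Dollar amounts are hypothetical.

def pfs_repayment(recidivism_reduction, principal, bonus_rate=0.05):
    """Payment owed to investors given an observed reduction in recidivism,
    expressed as a fraction (e.g., 0.08 for an 8% reduction)."""
    if recidivism_reduction < 0.075:
        return 0.0                         # outcome target missed: no repayment
    payment = float(principal)             # target met: repay principal
    if recidivism_reduction >= 0.08:
        payment += principal * bonus_rate  # stretch target met: add bonus
    return payment
```

The rule makes the financing logic concrete: investors, not taxpayers, bear the risk of the program falling short of its outcome target.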

With the promise of reducing pressures on the government to fund new programs, PFS has gained the attention of the Obama Administration, the U.S. Congress, and several states. For example, in 2012, the Obama Administration announced PFS as a major part of its social innovation strategy (Greenblatt, 2014). In 2014, Congress passed legislation to increase funding for PFS contracts (https://www.govtrack.us/congress/bills/113/hr4660), and the Department of Housing and Urban Development currently has a request for grant applications for PFS permanent supportive housing projects (www.pingree.house.gov/grant-opportunities/pay-success-permanent-supportive-housing-demonstration-deadline-feb-12-2016). Temple and Reynolds (2015) describe efforts in Utah and Colorado to support PFS contracts aimed at reducing special education placement.

Though attractive to the public sector and to those who want to expand the reach of prevention, PFS is not yet a proven mechanism for funding social programs. However, as Temple and Reynolds (2015) argue, prevention efficacy research and economic evaluation, specifically BCA, can play crucial roles in identifying viable candidates for PFS contracts. Research providing strong evidence of prevention program effects on behavioral health outcomes can lay a foundation for economic analysis documenting the timing and extent of anticipated economic gains. Temple and Reynolds (2015) show how research establishing the efficacy of the Chicago Child Parent Centers, which were evaluated in the Chicago Longitudinal Study, and related benefit-cost analysis (Reynolds, Temple, White, Ou & Robertson, 2011) contributed to the development of a social impact borrowing contract to scale the program. Their paper illuminates several important issues requiring careful consideration for PFS contracts to be successful, including strong economic models estimating the magnitude and timing of benefits anticipated from prevention, the possibility of perverse incentives, the need for evaluation by a neutral third party, and the role of foundations in reducing private investors’ exposure to risk in these contracts.

Temple and Reynolds (2015) also illuminate important differences between BCA as applied to PFS, which focuses on cost savings to the government sector, and BCA conducted from a broader, societal perspective. For example, a program that is not attractive from a PFS perspective may be attractive when viewed from a societal perspective. Moreover, as BCAs performed by WSIPP suggest, governments may be interested in more than the benefits to taxpayers that are central to PFS contracts (Lee & Aos, 2011); WSIPP models, for example, go beyond taxpayer impacts to include quality-of-life gains from reduced victimization when crime declines. This work suggests that the interests of PFS investors and governments may not fully overlap. Temple and Reynolds’ (2015) in-depth look at the use of BCA to support funding at scale of the Chicago Child Parent Centers intervention illuminates strengths and concerns in the use of PFS to scale prevention.

4 Future directions in benefit-cost analysis of prevention and intervention programs for youth and young adults

4.1 Common themes

Beyond their individual contributions, the collection of papers in this special issue suggests three themes pertinent to the larger context of evidence-based practice and policy as informed by economic evaluation.

Comprehensive analyses, that is, societal perspectives on benefits and costs, provide the optimal set of information for program implementers and decision makers. Societal perspectives on both benefits and costs ensure that decisions are made from the most informed position possible. Although values, political pressures, and other considerations may mean that some subperspectives (e.g., taxpayers) are given more weight than others, the societal perspective helps ensure that all perspectives can enter the decision-making process. On the cost side, comprehensive assessments of resources and related costs facilitate understanding of what is needed for high-quality program implementation, which makes achieving outcomes and realizing economic gains more likely. Comprehensive assessments of costs, outlining where the major cost drivers lie, can also stimulate research and program refinements to increase resource and economic efficiency. On the benefit side, failure to account for all perspectives or to consider possible economic gains from important outcomes can bias net present values downward, underselling the value of prevention and intervention programs. Although attaching monetary values to all economic outcomes may be the goal, Bowden and Belfield (2015) take a viable, conceptual approach; having established positive net benefits from gains in educational attainment, they acknowledge but do not estimate the health gains and reductions in crime that would make their results even more favorable.

In order for analyses to be comprehensive, shadow prices for noncognitive, social and emotional, and other “soft” skills that are common outcomes of prevention and intervention programs for youth need to be developed; shadow prices can establish the unique economic contribution of these outcomes, beyond what cognitive outcomes suggest. Many social programs target young children, long before they are independent members of society and part of the workforce. These programs really are investments, made to enhance the well-being and positive development of these young people over time. Current economic models can account for some of the economic gains and avoided costs of impactful programs. As youth development research increasingly recognizes the importance of noncognitive factors to positive development, including developmental outcomes with economic impact (e.g., educational attainment, substance use, delinquency and crime, and health), social program benefits models will need to expand if they are to be truly comprehensive. Such efforts are being supported by the RAND Corporation’s Valuing Outcomes of Social Programs (VOSP) project that is archiving estimates of such value from across the literature.
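Mechanically, applying a shadow price means multiplying a program’s measured effect on a noncognitive skill by an estimated dollar value per unit of that skill. The sketch below uses entirely hypothetical numbers, standing in for shadow prices that, as noted above, still require development and validation.

```python
# Hedged sketch of monetizing a noncognitive outcome with a shadow price.
# Every value here is a hypothetical placeholder, not a validated estimate.

effect_size = 0.25            # program effect on an SE skill, in standard deviations
shadow_price_per_sd = 1800.0  # hypothetical dollar value of a 1 SD gain in the skill
participants = 400            # hypothetical number of program participants

benefit_per_participant = effect_size * shadow_price_per_sd
total_monetized_benefit = benefit_per_participant * participants
```

The point of such valuation is that, once expressed in dollars, the noncognitive outcome can be summed with other monetized outcomes into a single net present value rather than left out of the benefits model.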

Benefit-cost analyses of prevention and intervention programs can help stimulate investment in these programs in at least three important ways. First, results of benefit-cost analyses, particularly net present values and related confidence intervals, can help identify social programs that represent an overall welfare gain to society, that is, those that are good investments of limited resources. As analyses by WSIPP and others have shown (http://wsipp.wa.gov/BenefitCost?topicId=9), not all effective programs will pass the benefit-cost test; although they may be impactful, some programs are not necessarily good public investments. Second, from an implementation standpoint, detailed information about costs, and to some extent benefits, can help schools, social service organizations, and community-based agencies understand which programs are best suited to their goals for reaching youth and young adults, capacity to implement them well, and budgetary constraints. Third, information about anticipated benefits streams, particularly fiscal impacts to governments, can facilitate the development of public–private–foundation partnerships to finance the implementation of evidence-based programs at scale.

4.2 Existing efforts to advance social program benefit-cost analysis

These themes reverberate in efforts, several currently under way, to increase the quality, comparability, and relevance of benefit-cost analysis of social programs.

Setting standards. Standards guiding BCAs of prevention and intervention programs could support quality in analyses as well as comparisons across analyses. When investment decisions involve choosing among alternative programs, economic evidence is most helpful if it can be compared in meaningful ways (Olson & Bogard, 2014). Papers by Karoly (2012) and Vining and Weimer (2010), originally part of a broader set of standards papers commissioned by the Society for Benefit-Cost Analysis, address the need for quality and standards. The Society for Prevention Research has also commissioned a task force charged with setting standards for BCAs of preventive interventions (Society for Prevention Research, 2014). Its report is due to be published in 2016.

Enhancing quality, utility, and use. Although benefit-cost analysis and economic evaluation more generally have practical implications for program implementers and those who make investment decisions, the divide between the production of these analyses and their actual use can be great. As several studies have shown (Oliver, Innvar, Lorenc, Woodman & James, 2014; Prewitt, Schwandt & Straf, 2012), policy makers may not have timely access to high-quality research evidence, may not see it as relevant, and may not have the collaborations or relationships with researchers that build confidence in the data. The economic analyses performed by WSIPP at the direction of the Washington State legislature and the work being undertaken through the Results First initiative (Lee & Aos, 2011; White & VanLandingham, 2015) are notable exceptions. To address the need for high-quality and useful economic evidence, the National Academy of Medicine has convened a consensus panel charged with improving the use of economic analysis to inform policy and investments for children, youth, and families (Institute of Medicine, 2014). Its final report and recommendations are also due in 2016.

Stimulating investments. As the paper by Temple and Reynolds (2015) illustrates, fiscal pressures across all levels of government have motivated the development of creative ways to finance the scaling up of effective prevention and intervention programs. In addition to providing new funds, these “pay for success” (PFS) or social impact bond arrangements typically shift risk away from government toward private investors and foundations. Benefit-cost analyses outlining the magnitude and timing of benefits, particularly fiscal benefits, have an important role to play in informing the terms of these contracts. The interest in PFS has led to the development of resources supporting the forging of these contracts, including the Nonprofit Finance Fund (payforsuccess.org), the Institute for Child Success (http://www.instituteforchildsuccess.org/), and the Government Performance Lab at the Harvard University Kennedy School of Government (http://siblab.hks.harvard.edu/).

Forging partnerships to build and utilize the economic evidence base. As economic evaluation and BCAs of social programs continue to grow, so too does the range of questions and issues addressed as well as their application to decision making (Crowley et al., 2012). The Prevention Economics Planning and Research Network, supported by the National Institutes of Health, brings together prevention and intervention researchers from across the country to increase the use of benefit-cost analyses and build the science of investing in healthy development. This open network of economists, prevention scientists, and policy analysts provides support and coordination for efforts to strengthen methodology and accelerate efforts to understand the economic and fiscal impact of social programs. This includes projects around valuing proximal program outcomes for long-term projection (e.g., shadow prices, monetary conversion factors), accessing administrative data systems, and supporting rigor in performance-based financing (e.g., PFS). Importantly, this network actively engages end users to increase the utility of estimates, for example by briefing Congress and engaging federal agencies on best practice and new findings.

5 Conclusion

The application of benefit-cost analysis to prevention and intervention programs for youth and young adults has grown tremendously over the past two decades, mirroring increases in knowledge about how to intervene effectively to prevent problems in development and how to respond to risks and problems so that developmental trajectories can be shifted for the better. As evidence of impact has grown, questions about the economic costs and benefits of successful programs have logically followed and in part represent the paradigm shift toward evidence-based practice, and more recently, evidence-based policy making. The papers in this issue spotlight important current themes in BCAs of social programs as well as efforts to advance BCA practice and utility and stimulate investments in social programs at scale.

References

Barnett, W. Steven (1996). Lives in the Balance: Age-27 Benefit-Cost Analysis of the High/Scope Perry Preschool Program. Monographs of the High/Scope Educational Research Foundation, Number Eleven. Ypsilanti, MI: High/Scope Educational Research Foundation.
Belfield, Clive, Bowden, A. Brooks, Klapp, Alli, Levin, Henry, Shand, Robert & Zander, Sabine (2015). The Economic Value of Social and Emotional Learning. Journal of Benefit-Cost Analysis, 6(3), 508–544.
Bowden, A. Brooks & Belfield, Clive (2015). Evaluating TRIO: A Benefit-Cost Analysis of Talent Search. Journal of Benefit-Cost Analysis, 6(3), 572–602.
Butler, David, Bloom, Dan & Rudd, Timothy (2013). Using Social Impact Bonds to Spur Innovation, Knowledge Building, and Accountability. Community Development Investment Review, 9(1), 53–58.
Clay, Rebecca F. (2013). Health Impact Bonds: Will Investors Pay for Intervention? Environmental Health Perspectives, 121(2), a45.
Cohen, Mark A. & Piquero, Alex R. (2009). New Evidence on the Monetary Value of Saving a High Risk Youth. Journal of Quantitative Criminology, 25(1), 25–49.
Cohen, Mark A. & Piquero, Alex R. (2015). Benefits and Costs of a Targeted Intervention Program for Youthful Offenders: The YouthBuild USA Offender Project. Journal of Benefit-Cost Analysis, 6(3), 603–627.
Corso, Phaedra S., Ingels, Justin B., Kogan, Steven M., Foster, E. Michael, Chen, Yi-Fu & Brody, H. Gene (2013). Economic Analysis of a Multi-Site Prevention Program: Assessment of Program Costs and Characterizing Site-Level Variability. Prevention Science, 14(5), 447–456.
Crowley, D. Max (2014). The Role of Social Impact Bonds in Pediatric Health Care. Pediatrics, 134(2), e331–e333.
Crowley, D. Max, Jones, Damon E., Greenberg, Max T., Feinberg, Mark E. & Spoth, Richard L. (2012). Resource Consumption of a Diffusion Model for Prevention Programs: The PROSPER Delivery System. Journal of Adolescent Health, 50(3), 256–263.
Fraas, Art & Lutter, Randall (2011). The Challenges of Improving the Economic Analysis of Pending Regulations: The Experience of OMB Circular A-4. Annual Review of Resource Economics, 3(1), 71–85.
Galloway, Ian (2014). Using Pay-for-Success to Increase Investment in the Nonmedical Determinants of Health. Health Affairs, 33(11), 1897–1904.
Gottfredson, Denise C., Cook, Thomas D., Gardner, Frances E., Gorman-Smith, Deborah, Howe, George W., Sandler, Irwin N. & Zafft, Kathryn M. (2015). Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research in Prevention Science: Next Generation. Prevention Science, 1–34.
Greenblatt, Jonathan (2014). Pay for Success: Spreading What Works and Sharing What We Know Across the U.S. https://www.whitehouse.gov/blog/2014/12/04/pay-success-spreading-what-works-and-sharing-what-we-know-across-us.
Greenblatt, Jonathan & Donovan, Annie (2013). The Promise of Pay for Success. Community Development Investment Review, 9(1), 19–22.
Hahn, Robert W. & Dudley, Patrick M. (2007). How Well Does the US Government Do Benefit-Cost Analysis? Review of Environmental Economics and Policy, 1(2), 192–211.
Haskins, Ron & Margolis, Greg (2014). Show Me the Evidence: Obama’s Fight for Rigor and Results in Social Policy. Washington, DC: Brookings Institution Press.
Heckman, James J., Humphries, John Eric, Veramendi, Greg & Urzúa, Sergio S. (2014). Education, Health and Wages. NBER Working Paper No. 19971.
Herman, Patricia M., Mahrer, Nicole E., Wolchik, Sharlene A., Porter, Michele M., Jones, Sarah & Sandler, Irwin N. (2015). Cost-Benefit Analysis of a Preventive Intervention for Divorced Families: Reduction in Mental Health and Justice System Service Use Costs 15 Years Later. Prevention Science, 16(4), 586–596.
Institute of Medicine (2014). The Use of Economic Evidence to Inform Investments in Children, Youth, and Families. http://iom.nationalacademies.org/Activities/Children/EconomicEvidence.aspx.
Jones, Damon, Karoly, Lynn, Crowley, Daniel & Greenberg, Mark T. (2015). Considering Valuation of Non-Cognitive Skills in Benefit-Cost Analysis of Programs for Children. Journal of Benefit-Cost Analysis, 6(3), 471–507.
Karoly, Lynn A. (2012). Toward Standardization of Benefit-Cost Analysis of Early Childhood Interventions. Journal of Benefit-Cost Analysis, 3(1), 1–45.
Karoly, Lynn A., Kilburn, M. Rebecca & Cannon, Jill S. (2006). Early Childhood Intervention: Proven Results, Future Promise. RAND Corporation.
Kuklinski, Margaret R., Briney, John S., Hawkins, J. David & Catalano, Richard F. (2012). Cost-Benefit Analysis of Communities That Care Outcomes at Eighth Grade. Prevention Science, 13(2), 150–161.
Kuklinski, Margaret R., Fagan, Abigail A., Hawkins, J. David, Briney, John S. & Catalano, Richard F. (2015). Benefit-Cost Analysis of a Randomized Evaluation of Communities That Care: Monetizing Intervention Effects on the Initiation of Delinquency and Substance Use through Grade 12. Journal of Experimental Criminology, 1–28.
Lee, Stephanie & Aos, Steve (2011). Using Cost-Benefit Analysis to Understand the Value of Social Interventions. Research on Social Work Practice, doi:10.1177/1049731511410551.
Levin, Henry M. & McEwan, Patrick J. (2001). Cost-Effectiveness Analysis: Methods and Applications (Vol. 4). Thousand Oaks, CA: Sage.
Long, Katherine, Brown, Joshua L., Jones, Stephanie M., Aber, J. Lawrence & Yates, Brian T. (2015). Cost Analysis of a School-Based Social and Emotional Learning and Literacy Intervention. Journal of Benefit-Cost Analysis, 6(3), 545–571.
Masse, Leonard N. & Barnett, Steven W. (2002). A Benefit-Cost Analysis of the Abecedarian Early Childhood Intervention. In Cost-Effectiveness and Educational Policy (pp. 157–173). Larchmont, NY: Eye on Education.
Miller, Ted R. & Hendrie, Delia (2009). Substance Abuse Prevention Dollars and Cents: A Cost-Benefit Analysis. Washington, DC: U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration, Center for Substance Abuse Prevention.
Office of Management and Budget (1992). Circular No. A-94. www.whitehouse.gov/omb/circulars_a094#1.
Oliver, Kathryn, Innvar, Simon, Lorenc, Theo, Woodman, Jenny & James, Thomas (2014). A Systematic Review of Barriers to and Facilitators of the Use of Evidence by Policymakers. BMC Health Services Research, 14(2), doi:10.1186/1472-6963-14-2.
Olson, Steve & Bogard, Kimber (Eds.) (2014). Considerations in Applying Benefit-Cost Analysis to Preventive Interventions for Children, Youth, and Families: Workshop Summary. Washington, DC: National Academies Press.
Perna, Laura W. (2015). Guest Editor’s Column: Preparing for Reauthorization of the Higher Education Act. Journal of Student Financial Aid, 45(3), 1.
Plotnick, Robert D., Young, Diane S., Catalano, Richard F. & Haggerty, Kevin P. (1998). Benefits and Costs of a Family-Focused Methadone Treatment and Drug Abuse Prevention Program: Preliminary Findings. In Bukoski, W. J. & Evans, R. I. (Eds.), Cost Benefit/Cost Effectiveness Research of Drug Abuse Prevention: Implications for Programming and Policy (NIDA Research Monograph, Vol. 176, pp. 161–183). Rockville, MD: National Institute on Drug Abuse.
Prewitt, Kenneth, Schwandt, Thomas A. & Straf, Miron L. (Eds.) (2012). Using Science as Evidence in Public Policy. Washington, DC: National Academies Press.
Reynolds, Arthur J., Temple, Judy A., White, Barry A., Ou, Suh-Ruu & Robertson, Dylan L. (2011). Age 26 Cost-Benefit Analysis of the Child-Parent Center Early Education Program. Child Development, 82(1), 379–404.
Sackett, David L., Rosenberg, William M. C., Gray, J. A. Muir, Haynes, R. Brian & Richardson, W. Scott (1996). Evidence Based Medicine: What It Is and What It Isn’t. BMJ: British Medical Journal, 312(7023), 71.
Schweinhart, Lawrence J., Montie, Jeanne, Xiang, Zongping, Barnett, W. Steven, Belfield, Clive R. & Nores, Milagros (2005). Lifetime Effects: The High/Scope Perry Preschool Study through Age 40. Monographs of the High/Scope Educational Research Foundation 14. Ypsilanti, MI: High/Scope Press.
Siu, Albert L., Bibbins-Domingo, Kirsten & Grossman, David (2015). Evidence-Based Clinical Prevention in the Era of the Patient Protection and Affordable Care Act: The Role of the US Preventive Services Task Force. Journal of the American Medical Association, 1–3.
Society for Prevention Research (2014). Mapping Advances in Prevention Science Task Force III: Cost-Benefit Analysis. http://www.preventionresearch.org/about-spr/committees-and-task-forces/.
Spoth, Richard L., Guyll, Max & Day, Susan X. (2002). Universal Family-Focused Interventions in Alcohol-Use Disorder Prevention: Cost Effectiveness and Cost-Benefit Analyses of Two Interventions. Journal of Studies on Alcohol, 63(2), 219–228.
Tanenbaum, Sandra J. (2005). Evidence-Based Practice as Mental Health Policy: Three Controversies and a Caveat. Health Affairs, 24(1), 163–173.
Temple, Judy & Reynolds, Arthur (2015). Using Cost-Benefit Analysis to Scale up Early Childhood Programs Through Pay for Success Financing. Journal of Benefit-Cost Analysis, 6(3), 628–653.
Trostel, Philip A. (2010). The Fiscal Impacts of College Attainment. Research in Higher Education, 51(3), 220–247.
Trupin, Eric & Kerns, Suzanne (2015). Introduction to the Special Issue: Legislation Related to Children’s Evidence-Based Practice. Administration and Policy in Mental Health and Mental Health Services Research, 1–5.
Villar, Anthony & Strong, Michael (2007). Is Mentoring Worth the Money? A Benefit-Cost Analysis and Five-Year Rate of Return of a Comprehensive Mentoring Program for Beginning Teachers. ERS Spectrum, 25(3), 1–17.
Vining, Aidan & Weimer, David L. (2010). An Assessment of Important Issues Concerning the Application of Benefit-Cost Analysis to Social Policy. Journal of Benefit-Cost Analysis, 1(1), 1–40.
Warner, Mildred E. (2013). Private Finance for Public Goods: Social Impact Bonds. Journal of Economic Policy Reform, 16(4), 303–319.
Washington State Institute for Public Policy (2015). Updated Inventory of Evidence-Based, Research-Based, and Promising Practices for Prevention and Intervention Services for Children and Juveniles in Child Welfare, Juvenile Justice, and Mental Health Systems. Seattle, WA: Evidence-Based Practice Institute & Washington State Institute for Public Policy. Document No. E2SHB2536-6.
White, Darcy & VanLandingham, Gary (2015). Benefit-Cost Analysis in the States: Status, Impact, and Challenges. Journal of Benefit-Cost Analysis, 6(2), 369–399.
Yates, Brian T. (2009). Cost-Inclusive Evaluation: A Banquet of Approaches for Including Costs, Benefits, and Cost-Effectiveness and Cost-Benefit Analyses in Your Next Evaluation. Evaluation and Program Planning, 32(1), 52–54.
Zerbe, Richard O., Plotnick, Robert D., Kessler, Ronald C., Pecora, Peter J., Hiripi, Eva, O’Brien, Kirk, Williams, Jason, English, Diana & White, J. (2009). Benefits and Costs of Intensive Foster Care Services: The Casey Family Programs Compared to State Services. Contemporary Economic Policy, 27(3), 308–320.