The reasons why behavioural change interventions keep failing are multifaceted; this is an important motif running through this commentary, though less so through the target article. The diagnosis the target article offers for why behavioural change interventions are doomed to fail is that behavioural scientists are focusing on the wrong unit of analysis. Like economists and social workers, we first need to acknowledge micro (individual, or "i-frame"), mezzo (group), and macro (population, or "s-frame") level differences in behaviour. By shifting from the micro straight to the macro level, we have a better chance of unlocking the potential of behavioural change interventions, while at the same time avoiding doing the bidding of private sector organisations.
First, researchers have already highlighted the serious problems involved in fixating narrowly on fitting an intervention to a target behaviour while neglecting the wider context in which both are embedded (Meder, Fleischhut, & Osman, 2018). This is also where we begin to see that a thorough diagnosis of failure requires a multidisciplinary approach.
Second, by focusing on where successes lie, we attend less to how interventions fail, how often they fail, and where they fail (Hummel & Maedche, 2019; Osman et al., 2020). By making inroads into classifying the many types of failures that have been documented (Osman et al., 2020), we can start to address these outstanding issues. Doing so also opens up opportunities to work with decision scientists, data scientists, and social scientists to understand and explain why behavioural change interventions fail when they do, and what success realistically looks like (Cartwright & Hardie, 2012). A unifying causal analytic approach can help build theories and new empirical practices (Bryan, Tipton, & Yeager, 2021; Osman et al., 2020) that uncover which combinations of interventions can work (e.g., Osman et al., 2020).
Third, because behavioural scientists are offering practical solutions to public policy problems, such as those in Tables 1 and 2 of the target article, we confront the world of policy making. Maintaining a naïve understanding of the science–policy interface, in which the accessibility of evidence is viewed as the key to successful implementation (Reichmann & Wieser, 2022), is a considerable barrier to estimating realistic success rates of behavioural change interventions. We might assume that evidence is taken up through what is often referred to as the policy cycle – agenda setting, policy formation, decision making, policy implementation, and policy evaluation (Lasswell, 1956). But research in public policy, public administration, and political science shows that this is an idealisation: there are at least six competing characterisations of the policy-making process, and in each the uptake of scientific evidence is far from linear (Cairney, 2020). So, to inform public and social policy making, behavioural scientists need at least to consider the policy issues to be addressed from the perspective of those who are likely to implement the behavioural interventions.
Scientific progress depends on acknowledging failure, and the target article is an honest account of the limitations of past efforts to achieve behavioural change. However, viable solutions will depend on an accurate characterisation of the aetiology of those failings, along with a theoretical account that lays the foundations for new theorising and empirical investigation.
Competing interest
None.