
Digital Social Policy: Past, Present, Future

Published online by Cambridge University Press:  17 March 2022

PAUL W. FAY HENMAN*
Affiliation:
Professor of Digital Sociology and Social Policy, Chief Investigator, ARC Centre of Excellence for Automated Decision Making & Society, School of Social Science, University of Queensland, QLD 4072
*Corresponding author, email: [email protected]


This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

We undoubtedly live in a digitally infused world. From government administrative processes to financial transactions and social media posts, digital technologies automatically collect, collate, combine and circulate digital traces of our actions and thoughts, which are in turn used to construct digital personas of us. More significantly, government decisions are increasingly automated with real-world effect; companies subordinate human workers to automated processes; while social media algorithms prioritise outrage and ‘fake news’, with destabilising and devastating effects for public trust in social institutions. Accordingly, what it means to be a person, a citizen, and a consumer, and what constitutes society and the economy in the 21st century, are profoundly different from those in the 20th century.

It is therefore somewhat surprising that digital technology has been largely absent from social policy research. In its 50-year history, the Journal of Social Policy has published only two research papers with the word ‘digital’ in their titles, abstracts, or keywords. Indeed, as of the end of 2021, in the top four social policy journals – Journal of Social Policy (established 1972), Social Policy and Administration (established 1967), Critical Social Policy (established 1981), and Social Policy and Society (established 2002) – only 11 research papers list ‘digital’ in their title, abstract or keywords, and only 22 use ‘digital’, ‘computer’, ‘automation’, ‘electronic’ or ‘ICT’ (see Figure 1).[1]

FIGURE 1. Publications with ‘digital’, ‘electronic’, ‘automation’, ‘ICT’, or ‘computer’ in title, abstract or keywords, by decade

Across all four journals, the first such paper was published in 1977, when Adler and Du Feu (1977) examined computer-based welfare benefits information systems; the next did not appear until 20 years later (Henman, 1997), reporting on the role of computers in policy processes in Australia’s social security system. As illustrated in Figure 1, it was not until this millennium that digital technology started stimulating much interest, yet it remains marginal, constituting less than half of one percent of all papers published in each journal, rising to about one percent over the last twenty years.
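For readers who wish to replicate or extend this kind of bibliometric count, the following minimal sketch shows one way to do so in Python, assuming article metadata has been exported to a CSV file; the file name and column names (‘title’, ‘abstract’, ‘keywords’, ‘year’) are illustrative assumptions, not the actual data behind Figure 1.

```python
import re
import pandas as pd

# Illustrative input: one row per article, with 'title', 'abstract',
# 'keywords' and 'year' columns exported from a journal database.
articles = pd.read_csv("social_policy_articles.csv")

TERMS = re.compile(r"\b(digital|computer|automation|electronic|ICT)\b",
                   re.IGNORECASE)

def mentions_term(row) -> bool:
    """True if any search term appears in the title, abstract or keywords."""
    text = " ".join(str(row[col]) for col in ("title", "abstract", "keywords"))
    return bool(TERMS.search(text))

articles["match"] = articles.apply(mentions_term, axis=1)
articles["decade"] = (articles["year"] // 10) * 10

# Count matching papers per decade, as in Figure 1.
print(articles.groupby("decade")["match"].sum())
```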

Digital technology not only relates to the present. The implications of digital technologies over the past 50 years have been profound, yet largely absent from social policy research. One explanation for this absence may be that technology is viewed as part of government administrative practice, or public administration, which is often considered empirically and conceptually separate from the substance of (social) policy. Yet public administration scholars have also lamented the absence of digital technology in their discipline (Meijer, 2007; Pollitt, 2012). This generalised absence points to another possible explanation: that the social sciences more broadly have a blind spot when considering the role of nonhumans in the social world. Indeed, Law (1991) argued that technology appears in the social sciences as ‘monsters’: seen as unusual and exotic, but not significant in understanding society and social dynamics. Such ontological explanations may be coupled with an expertise limitation. Social scientists, including social policy academics, do not typically have much training or expertise in technical matters, and may feel ill-equipped to examine them.

Regardless of the reasons, digital technologies are now constantly in our hands, at our fingertips, and operating 24/7 around us, and the operation of social policy is not exempt. It is therefore essential to bring the ‘digital’ into ‘social policy’. The key purpose of this paper, therefore, is to provide a primer, map and navigational tool for apprehending the ways in which digital technologies have come into, shaped, transformed, and are redefining social policy and its administration. Such an intellectual capacity is necessary to more fully consider the ways in which digital technology, through social policy, (re-)structures society, treats citizens and service users, and (re-)distributes resources and dis/advantages.

To this end, section two provides an empirical overview of the evolution of digital technologies in social policy processes. It highlights how these past, present and emerging technological developments have been, and continue to be, key to major social policy and administration transformations. Section three then explores the policy, legal, ethical, and power dimensions of these changes, and outlines responses to them. Section four suggests conceptual and methodological skills that can enable social policy researchers to better engage with and through digital technologies. The conclusion explores the key implications of the paper for future social policy research.

2. Digital technologies in the evolution of social policy and administration

Shortly after World War II, governments around the world started adopting new electronic computers to support the administration of social policy. In its 1959 report, the British Ministry of Pensions and National Insurance included a photograph of its newly installed “Automatic Data Processing equipment”, deployed to “reduce staff costs” by handling staff payroll and production of statistics (1960). In the other hemisphere, in 1967 the Australian Department of Social Security began electronic payments to pensioners (Department of Social Security, 1968). In the UK, local government authorities began computerising their social service records in the early 1970s (Lingham and Law, 1989), a similar timing to that in the USA, where Boyd et al. (1978) found computerisation in local governments was primarily introduced for administrative purposes, rather than education or direct service provision.

Over the next 40 years, computers grew to become the backbone of social policy administration and practice. With back-office computers connected to online websites and smart phone apps, people can apply for benefits and services 24/7 and report to government to ensure ongoing compliance. Concurrently, computers constantly assess incoming data and match these with other government and non-government (e.g. banks, social media) datasets to ensure compliance, which in turn automates identity and eligibility assessments, suspension and cancellation of benefits or services, and issuance of penalties and debts (Eubanks, 2018, ch. 2; Braithwaite, 2020). Clients can increasingly engage with governments through chatbots and online services (Henman, 2019), sometimes with automated voice, fingerprint or facial recognition systems. Data analytics and machine learning (a.k.a. artificial intelligence (AI)) increasingly enable governments to profile citizens, giving differentiated, personalised or ‘special’ treatment to beneficiaries, children and adults ‘at risk’, or (potential) offenders (Gillingham, 2017; Kehl and Kessler, 2017; Desiere and Struyven, 2020). Web-linked digital technologies are used to remotely monitor the sick, and robots provide companionship for those living alone (Majumder, Mondal and Deen, 2017; Vandemeulebroucke, de Casterlé and Gastmans, 2018).[2]
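To make the data-matching logic concrete, here is a minimal, hypothetical sketch of the kind of cross-dataset comparison described above; the datasets, field names and tolerance are invented for illustration and do not reflect any agency’s actual matching rules.

```python
import pandas as pd

# Hypothetical datasets: income declared to the benefits agency versus
# income reported by a third party (e.g. an employer or bank).
declared = pd.DataFrame({
    "client_id": [1, 2, 3],
    "declared_income": [1200.0, 0.0, 800.0],
})
reported = pd.DataFrame({
    "client_id": [1, 2, 3],
    "reported_income": [1200.0, 450.0, 950.0],
})

matched = declared.merge(reported, on="client_id")
matched["discrepancy"] = matched["reported_income"] - matched["declared_income"]

# Flag cases exceeding an (illustrative) tolerance for human review --
# automating the ensuing decision itself is where systems like robodebt failed.
TOLERANCE = 50.0
flagged = matched[matched["discrepancy"].abs() > TOLERANCE]
print(flagged[["client_id", "declared_income", "reported_income", "discrepancy"]])
```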

Many justifications for digitisation have not changed – efficiency, cost-cutting, staff savings, consistency of decisions, and reducing errors. Over time, new justifications have included policy responsiveness and agility, customer service and service innovation, personalisation, overpayment and fraud detection, enhanced governance, and improved accountability and democracy (e.g. OECD, 2016). While productivity has increased overall, this does not necessarily equate to cost savings (Henman, 1996), and legacy and complex computer systems have at times also reduced policy responsiveness. At the same time, there has been a string of major ICT disasters (Goldfinch, 2007), with the UK’s Universal Credit system a prominent recent example (Evenstad, 2020).

Over these decades some broad trajectories can be discerned in digitally enabled social policy administration and delivery, the substance of social policy, and the governance of social policy.

First, in the administration and delivery of social policy, computerisation was initially used to automate routine, well-defined activities, such as keeping databases of individuals’ national insurance instalments, payment of benefits, production of statistics, and calculation of benefits. Increasingly, automation extended to new areas of activity and enabled new forms of social policy administration. What were previously regarded as non-routine decisions requiring professional or administrative judgement have been supplemented and supplanted by Decision Support Systems, Expert Systems, and more recently AI. Bovens and Zouridis (2002) have characterised an ongoing shift from street-level, to screen-level, to system-level bureaucracy as computers become more central to front-line operations and then to systemically automated operations. These developments have led to charges of deskilling (Karger, 1986) and reductions of administrative and professional discretion (Høybye-Mortensen, 2013; Zouridis, Van Eck and Bovens, 2020). Technology cuts different ways, and while codification of rules and laws limits discretion, it has also importantly helped to clarify citizens’ eligibility and social rights. Alternative models of computerisation – designed to support government officials rather than to automate their work – have been observed, particularly in the Scandinavian welfare states, though the distance between administrators and citizens seems to have increased regardless (Snellen and Wyatt, 1992; Adler and Henman, 2009). Digital technologies have also seen a spatial and geographical decoupling of social administration. With networked computers, telephone centres became possible, and with the internet and online computers, websites and smart phone apps were introduced, thereby shifting from a 9-to-5 bricks-and-mortar administration to 24/7 operations. Digital data networks have also facilitated the outsourcing of social services to non-government and commercial agencies. Overall, Dunleavy et al. (2006) have articulated a ‘digital era governance’ replacing new public management rationalities in order to better enact personalised, whole-of-government approaches.
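The codification point can be illustrated with a toy example: once eligibility rules are written as code they become explicit, consistent and auditable, but they leave no room for the discretion an officer might once have exercised over an unanticipated case. All payment rates and thresholds below are invented.

```python
from dataclasses import dataclass

@dataclass
class Claimant:
    age: int
    weekly_income: float
    dependants: int

def weekly_benefit(c: Claimant) -> float:
    """Codified eligibility rules: explicit and consistent, but unable to
    accommodate cases the rule-writers did not anticipate.
    All figures are invented for illustration."""
    if c.age < 18 or c.weekly_income > 600:
        return 0.0                       # ineligible
    base = 250.0 if c.age >= 25 else 200.0
    base += 80.0 * c.dependants          # per-dependant supplement
    # Taper: benefit reduced 50p per pound of income above a free area.
    taper = max(0.0, (c.weekly_income - 150.0) * 0.5)
    return max(0.0, base - taper)

print(weekly_benefit(Claimant(age=30, weekly_income=200.0, dependants=2)))
```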

Second, digital technologies have provided the mechanisms for changes in the substance of social policy. In addition to the shift to more codified social policy, policy and its delivery have become much more differentiated, individualised and personalised – for example, creating different payment rates for different sub-populations, geographical areas, or risk/need profiles (Henman, 2010). Instead of universal ‘one-size-fits-all’ provision, policies have become more nuanced, better responding to human diversity. What was enabled by administrative or professional discretion becomes codified into complex algorithms. Networked computer systems have also increasingly supported a growing conditionality of social policy, by making eligibility for certain services and benefits conditional on circumstances or behaviours evidenced in digital databases. Consequently, conditionality of social policy has increasingly joined up separate policy areas and cross-cut different policy objectives, such as removing child benefit from parents whose children are truanting or not immunised (Henman, 2011). Both differentiation and conditionality in social policy increase its complexity, with implications for citizen access and accountability (Henman, 2010). Computer modelling and simulation tools have also supported the development of such complex policies and enhanced policy makers’ capacity to create more nuanced, farseeing, and far-reaching policies (Harding, 2017). Over time, social service agencies have amassed enormous digital administrative datasets, or ‘big data’ (Mayer-Schönberger and Cukier, 2013). Using data analytical techniques, these datasets are being used to shape social policies, such as New Zealand’s social investment approach, which uses actuarial approaches to target decisions and inform directions in social security, the care and protection of children, and the delivery of social services (O’Brien, 2020). Finally, social policy itself has considered how digital technologies (re-)produce and reinforce social disadvantage where, under the nomenclature of the ‘digital divide’, people without access or the ability to use digital technologies (e.g. computers, internet, smart phones) – people who are typically already disadvantaged – are further excluded from full participation in society (Notley and Foth, 2008; Kim, Lee and Menon, 2009).
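The policy-modelling point can be sketched briefly: microsimulation applies a codified payment rule to a (here synthetic) population in order to compare the aggregate effects of policy variants. The rule, rates and population distributions below are invented for illustration.

```python
import random

random.seed(1)

# Synthetic population for a toy microsimulation (invented distributions).
population = [
    {"age": random.randint(18, 70), "weekly_income": random.uniform(0, 900)}
    for _ in range(10_000)
]

def benefit(person, free_area: float, taper_rate: float) -> float:
    """Toy payment rule: flat rate tapered against income."""
    if person["weekly_income"] > 700:
        return 0.0
    reduction = max(0.0, (person["weekly_income"] - free_area) * taper_rate)
    return max(0.0, 250.0 - reduction)

# Compare the aggregate weekly cost of two policy variants.
for free_area, taper in [(150.0, 0.5), (200.0, 0.4)]:
    cost = sum(benefit(p, free_area, taper) for p in population)
    print(f"free area {free_area}, taper {taper}: total {cost:,.0f}/week")
```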

Third, the governance of social policy has been transformed as a result of digital technologies. As Bellamy and Taylor (1998) observe, computerisation is as much about automation as it is about informatisation; namely, the production of data, information, and in turn knowledge. Such knowledge is increasingly central to the governance of social policy: for operational management; for understanding citizens’ needs and trends; and for reflecting on and revising social policy. Digital data and algorithmic decision making also transfigure accountability processes. In administrative review and appeals, digital data can ostensibly provide objective traces of administrative transactions, computer algorithms can provide explanations for decisions, and computer code can be externally audited (Adler and Henman, 2009). The use of blockchains to enhance these processes is also being explored (Berryhill, Bourgery and Hanson, 2018). Countering these developments, cultural attitudes that ‘the computer is correct’, a lack of administrative openness, and the complexity of algorithms (made worse with recent moves into machine learning) can undermine administrative justice processes and outcomes (Henman, 2021).

3. Implications of digital social policy

Recent high-profile controversies in social policy have highlighted considerable policy, legal, ethical, political, and power issues of digital social policy. Such controversies include: England’s Ofqual algorithm of 2020 (Kelly, 2021); Australia’s Online Compliance Intervention (OCI) system, colloquially called ‘robodebt’ (Carney, 2019; Mann, 2020); the illegal Dutch SyRI social benefit fraud system (Bekker, 2021); the use of COMPAS in USA criminal justice systems for parole and sentencing decisions (Kehl and Kessler, 2017; Hannah-Moffat, 2019; Hartmann and Wenzelburger, 2021); Allegheny County’s Family Screening Tool (Vaithianathan et al., 2017; Eubanks, 2018, ch. 5); China’s social credit system (Dai, 2018); and the USA Medicaid Electronic Visit Verification (EVV) system for carers of people with disability (Mateescu, 2021). These examples illustrate some key issues arising from digital social policy.

Digital technology often enhances state surveillance and control. Digital social policy may be designed and deployed to reinforce political agendas and rationalities (Graham and Wood, 2003; Benjamin, 2019). This is particularly pertinent in much social policy, where the focus is on disadvantaged or marginalised peoples within a historical system of negative social valorisation and state control. Indeed, the above examples of Australia’s robodebt, the Dutch SyRI system, and the USA’s EVV system are premised on suspicion of welfare recipients as fraudsters, resulting in unethical and often illegal curtailment and cessation of social benefits and rights.

Computer algorithms and big data are culturally constructed as accurate, objective and true (Holbrook, 1986), which can undermine critical appraisal of digital social policy and administrative decisions, and therefore reduce government accountability. The growing amounts of data collected, and their submission by citizens, have also placed greater administrative burdens (Herd and Moynihan, 2019) on social policy recipients, which can reproduce social divisions in welfare (Henman and Marston, 2008). Taken together, these two dynamics can implicitly and explicitly lead to what Lipsky (1984) called ‘bureaucratic disentitlement’, or what might today be renamed ‘algorithmic disentitlement’: when, in the words of the BBC TV series Little Britain, ‘the computer says “No”’. Again, all the cases above demonstrate the difficulty of understanding how and why algorithmic decisions are made, and the significant challenges in contesting and overturning them, often requiring major, concerted legal and political interventions.

The rise of big data, data analytics and machine learning has accelerated several concerns. Predictive or risk assessments for profiling generate significant policy, legal and ethical concerns about treating people differently on the basis of possibilities or calculated futures, not actualities (Henman, 2005). In the USA, such approaches have been argued to breach the Fourth Amendment prohibiting unreasonable searches and seizures (Slobogin, 2008). Concerns also arise about the bias of data and algorithms, treating people differently based on protected characteristics such as race/ethnicity, gender, sexuality, and religion. Racial bias is particularly evident in the COMPAS profiling system (Washington, 2018), arising from the use of historical crime data generated within racially biased policing to build such systems.
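A minimal sketch illustrates the kind of group-fairness audit applied to profiling tools such as COMPAS: train a classifier on (here synthetic) historical data and compare false-positive rates across a protected group. The bias is deliberately built into the synthetic labels to mimic how biased policing data contaminates training data; nothing here reproduces COMPAS itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: 'group' is a protected attribute; the historical outcome
# labels are generated with a built-in bias against group 1.
group = rng.integers(0, 2, n)
risk_factor = rng.normal(0, 1, n)
outcome = (risk_factor + 0.8 * group + rng.normal(0, 1, n) > 1.0).astype(int)

X = np.column_stack([risk_factor, group])
model = LogisticRegression().fit(X, outcome)
pred = model.predict(X)

# Group-fairness audit: false-positive rate per group.
for g in (0, 1):
    mask = (group == g) & (outcome == 0)
    fpr = pred[mask].mean()
    print(f"group {g}: false-positive rate {fpr:.2f}")
```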

The drift to more automated social policy heightens concerns about its ‘black box’ or opaque nature (Pasquale, 2015), thereby reducing accountability and fairness. In response, there is significant technical work on building algorithms to purportedly achieve fairness and transparency, including by making algorithmic decisions explainable (https://www.fatml.org/). These developments have also prompted calls for continued human oversight. Professional discretion was once a way to ensure people were considered according to their individualities, recognising that ‘one-size-does-not-fit-all’; with complex differentiated algorithms (machine learning being the ultimate expression of this), we are now learning that ‘one-algorithm-does-not-fit-all’ either. We potentially need new ways to augment digitally-enacted social policy with human-enacted social policy.

Many of these challenges of digital social policy have been flagged over the last four decades, but they have largely remained at the fringes of social policy, public administration, and legal considerations. Fortuitously, the development of machine learning (under the marketing banner of ‘AI’) has stimulated much interest in the ethical, legal, and human rights dimensions of the use of AI in government. Multiple reports by governments, think-tanks, research institutes and corporations have charted the broad issues (see Alston, 2019 for a focus on welfare states). Fjeld et al. (2020) have helpfully summarised these reports and identified eight major areas for consideration: privacy; accountability; safety and security; transparency and explainability; fairness and non-discrimination; human control of technology; professional responsibility; and promotion of human values. The current agenda is to provide policy, legal and regulatory responses to address them. Emerging policy and legal responses to these challenges are discussed in this paper’s conclusion.

4. Conceptual and methodological innovations for digital social policy

In addition to greater empirical knowledge of digital technologies in social policy, critical digital social policy research requires both conceptual and methodological innovations. Conceptually, four key areas are canvassed.

First, a digital social policy sub-discipline necessitates an ontology that incorporates digital technology into its remit. Social policy cannot be solely a study of people and institutions, but must recognise the real ways digital (and other) technologies shape and enact social policy and its effects. Latour (1992) makes this point in referring to technologies (and nonhumans) as the ‘missing masses’ in understanding society. Such an ontology appreciates the ways in which both the social shapes the technological, and the technological shapes the social, thereby avoiding simplistic technologically or socially deterministic accounts. Like all socio-technical innovations, digital technologies open up new opportunities and forms of knowledge and action, while closing down others. A range of social theoretical approaches can grapple with these ontological considerations, particularly in Science and Technology Studies (Fuchs, 2007; Matthewman, 2011). I have found Actor Network Theory (ANT) most helpful (Callon, 1986, 2001; Law, 1992; Latour, 2005), as it takes seriously the materiality of our world. Arising within, and partly in response to, a period of hyper social constructivism, the re-discovery of materiality is crucial for digital social policy, even if the operations of digital technology can seem quite immaterial and ephemeral. New philosophical approaches to materialism make up a key plank in this thinking (Verbeek, 2005; Ihde, 2012).

Second, once the materiality of technology (and social policy) is taken seriously, the conceptual challenge is to understand how this materiality is shaped and how it shapes us, in a way that is not deterministic. Here, the concept of affordance is key to this analytical work (Davis and Chouinard, 2016). Davis (2020), for example, examines the ways in which artifacts request, demand, allow, encourage, discourage, and refuse. Think of how algorithms determine eligibility for services or cut off benefits. Computer databases also structure the type of data that is collected, and thus enable and constrain the nature of the knowledge that can be produced with them (Henman, 1995). Computers also make instantaneous calculations and circulate data across networks at close to the speed of light (Castells, 1996), and support the easy circulation and reproduction of digital data.

Third, a basic working knowledge and critical understanding of both digital data and algorithms is required. There are now emerging areas of critical data studies (Kitchin and Lauriault, 2014; Iliadis and Russo, 2016) and critical algorithm studies (https://socialmediacollective.org/reading-lists/critical-algorithm-studies/). These bodies of work highlight the way in which both digital data and algorithms are not ontologically, ethically, politically or socially neutral. They are created by humans (typically white, male, educated ICT professionals) who consciously or unconsciously embed their own ways of thinking and/or visions about how the world works and what forms of knowledge are constructed and important (Winner, 1980; Sandvig et al., 2016; Ruppert et al., 2017). Most acutely, given the current frenzy in global techno-political debates, a detailed and critical understanding of ‘Artificial Intelligence’ and machine learning is also required (Taulli and Oni, 2019).

Fourth, a critical digital social policy must take account of the nature and practice of power in a digital world. Traditionally, most critical approaches to digital technology have focused on operations of surveillance (Lyon, 2006). Theoretically grounded approaches to power include those drawing on Marxist and Weberian traditions (Castells, 2013; Schroeder, 2018). With his broader conceptualisation of power – including its disciplinary, productive and capillary-like manifestations – Foucault’s work has stimulated a significant body of scholarship. Drawing on Foucault’s concept of governmentality, several authors have sought to clarify how digital technology governs (Henman, 2010, 2013) and even enacts an ‘algorithmic governmentality’ (Rouvroy, 2011; Morison, 2016; Rouvroy and Stiegler, 2016; Henman, 2020). Such a mode of rule involves governing segmented peoples and populations differentially via profiling and anticipatory assessments of risk, danger, and prosperity. Political economy approaches to digital technology recognise the increasing role of global tech firms operating in highly intertwined contractual relationships with states, enacting surveillance capitalism (Zuboff, 2019; Prainsack, 2020).

Social policy research can also benefit from innovative digital data, research tools and methodologies. Digitisation of data and new data platforms and collection tools have expanded the range, variety, and volume of data with which to examine social policy. In addition to enormous government administrative digital datasets, digital data can be obtained from social media platforms, websites and digital collection tools. The widening diversity of digital data also provides the basis for new ways of interpreting social problems and creating policy solutions. For example, geo-coded data have enhanced our capacity to understand the geographical distribution of social issues and develop responses (Wise and Craglia, 2007; Ballas et al., 2018).

Under a broad umbrella of ‘computational social science’, digital research methodologies include social media analysis, text analytics, social network analysis and computer modelling (Cioffi-Revilla, 2014; Alvarez, 2016).

Social media platforms provide spaces for digital ethnographies and analysis of posts. Scholars have studied people’s attitudes to social issues (Bright et al., 2014) or social policy (Brooker et al., 2016), and fed these into policy decision making (Simonofski, Fink and Burnay, 2021). Studying Twitter hashtags (#) has been particularly popular for understanding the politics of social issues (Carr and Cowan, 2016; Ince et al., 2017).
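As a minimal illustration of hashtag analysis, the following sketch extracts and counts hashtags from a handful of invented post texts; collecting real posts via platform APIs (and complying with their terms of service) is a separate matter.

```python
import re
from collections import Counter

# Assume `posts` holds the text of already-collected social media posts;
# these three are invented placeholders.
posts = [
    "Welfare reform debate heating up #auspol #robodebt",
    "New report on benefit sanctions #welfare #robodebt",
    "#auspol Senate estimates today",
]

hashtags = Counter(
    tag.lower()
    for post in posts
    for tag in re.findall(r"#\w+", post)
)
print(hashtags.most_common(5))
```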

Computational text analysis provides the means by which large textual datasets can be analysed to apprehend the diversity of topics covered, including over time and space, such as media framing of refugees in Europe (Heidenreich et al., 2019), comparing policy responses to COVID-19 (Goyal and Howlett, 2021), understanding the environment-poverty nexus (Cheng et al., 2018), or examining imagined versus real care relationships (Ludlow et al., 2019).
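A minimal topic-modelling sketch follows, using scikit-learn’s latent Dirichlet allocation on a few placeholder documents; real applications use thousands of texts and careful pre-processing.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Assume `documents` is a list of policy texts (speeches, media articles,
# submissions); these three are placeholders.
documents = [
    "child protection risk assessment screening referral",
    "pension payment income test retirement benefit",
    "refugee asylum border media framing crisis",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(documents)   # document-term matrix

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(dtm)

# Print the most probable words per topic.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```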

Networks – constituted as connections between people, things, or ideas – provide alternative ways for understanding the social policy world. Supported with visualisation tools and network metrics, networks of policy and social service institutions, communities and collaboration (Devereaux et al., 2009; McNutt and Pal, 2011; Henman et al., 2014; Henman and Foster, forthcoming) and social movements (Ackland and O’Neil, 2011) can be charted via scraping websites and social media platforms.
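A minimal sketch of such network analysis using the networkx library; the hyperlink edges below are invented placeholders for the kind of data obtained by crawling organisational websites.

```python
import networkx as nx

# Hypothetical hyperlink network among social policy organisations.
edges = [
    ("dept_social_services", "ngo_welfare_rights"),
    ("dept_social_services", "peak_body"),
    ("ngo_welfare_rights", "peak_body"),
    ("peak_body", "research_centre"),
    ("research_centre", "ngo_welfare_rights"),
]

G = nx.DiGraph(edges)

# Centrality metrics suggest which organisations broker the network.
for node, score in sorted(
    nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1]
):
    print(f"{node}: {score:.2f}")
```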

Online tracking tools have also been used for (quasi-)experiments to better understand how people respond to political and policy changes (Margetts, 2011), and to assess the usability of government websites for citizens finding information about public services (Henman and Graham, 2018).

5. Concluding discussion

Given the growing entanglement of digital technologies in every aspect of our lives, social policy scholars must pay greater attention to them and to their positive and negative contributions to social policy processes, including policy formation and enactment. Digital technologies should not only cause dread, but also spark opportunities to advance shared social policy objectives and values. As the range of dates of the references included in this paper suggests, many of the realities and issues of digital technology in social policy are not new, with emerging technologies accompanied by similar dynamics and challenges. To this end, in critically learning from our past, I suggest the following four areas for particular focus in future social policy research.

First, co-design must be a central objective of digital social policy. Digital technologies are typically designed for the agendas of government agencies (and global technology and consultancy firms). Even when done with good intent, they are designed with ‘imagined users’ who are often white, highly educated, middle class, and mostly men, which reinforces social disadvantages (Benjamin, 2019). Not only do multiple perspectives need to be involved in designing digital technology for social policy and social policy for digital technology, but social policy researchers and advocates need to engage with people in identifying digital technologies that address the needs of social policy recipients,[3] just as persons with disability have long created alternative technologies centred on their experiences.

Building policy and legal innovations that steer digital technologies in human-centred ways is a second important task. The EU’s 2016 General Data Protection Regulation (GDPR) and 2021 proposed Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)[4] provide important legal frameworks that need wider consideration, implementation, and critique. Similarly, in 2019 the UK published A guide to using artificial intelligence in the public sector.[5] These provide important protections and frameworks that require continuing vigilance to ensure that governments’ digital actions comply with the intent of the legal frameworks. A further necessary protection is to extend administrative review and appeal rights and practices beyond individual decisions to challenging the specifics of algorithmic code that structurally generates problematic decisions. This might occur by extending independent review institutions’ remit to include the machinery of automated decisions.

Third, just as social policy and administration scholars gave considerable critical attention to New Public Management’s reconfiguration of social policy thinking and delivery, finding that it often resulted in damaging consequences for the most disadvantaged, scholars need to be continually alert to the ways in which commercial technology and consultancy interests and proprietary software are being inserted into social policy through opaque relationships with social policy agencies (Brown, 2021). This is particularly pertinent as the history of private sector involvement in publicly funded services (e.g. public-private partnerships and outsourcing) is rife with reduced transparency and accountability through commercial-in-confidence provisions.

Fourth, digital technologies increasingly enable the personalisation of policy and service delivery based on individual characteristics and circumstances. Such personalisation creates increasingly varied and divergent experiences of social policy. Consider how your Google search results, purchase recommendations, or social media feeds are shaped by your own histories and profiles. When social policy and administration are like this, it becomes harder to have a collective, shared experience of social policy and its institutions, and to appreciate how others experience them. Accordingly, we need to be mindful of the value of universalism alongside personalisation,[6] lest we splinter ourselves further into a fragmented, individualised society – one in which we live in a Matrix, a system that risks becoming humanity’s enemy.

Digital technology is only going to grow in its centrality to social policy and service delivery. It is well past time that social policy researchers and advocates critically embraced it.

Acknowledgments

This paper was written as part of the Australian Research Council Centre of Excellence for Automated Decision Making & Society (CE200100005).

Competing interests

The author declares none.

Footnotes

[1] The Table of articles is available from the author.

[2] See also Alston (2019).

[3] See, for example, Cook et al.’s (2019) consideration of smart phone apps that may help parents navigate post-separation parenting and child support agencies.

[6] As du Gay (2000) has done with revaluing bureaucracy.

References

Ackland, R. and O’Neil, M. (2011), “Online collective identity: The case of the environmental movement,” Social Networks, 33(3), 177–190.
Adler, M. and Du Feu, D. (1977), “Technical Solutions to Social Problems?: Some Implications of a Computer-Based Welfare Benefits Information System,” Journal of Social Policy, 6(4), 431–447.
Adler, M. and Henman, P. (2009), “Justice Beyond the Courts: The Implications of Computerisation for Procedural Justice in Social Security,” in E-Justice, IGI Global, pp. 65–86.
Alston, P. (2019), Digital technology, social protection and human rights (Seventy-fourth session ed.), New York: UN.
Alvarez, R. M. (2016), Computational social science, Cambridge University Press.
Ballas, D., Clarke, G., Franklin, R. S. and Newing, A. (2018), GIS and the social sciences: Theory and applications, Routledge.
Bekker, S. (2021), “Fundamental rights in digital welfare states: The case of SyRI in the Netherlands,” in Netherlands Yearbook of International Law 2019, Springer, pp. 289–307.
Bellamy, C. and Taylor, J. A. (1998), Governing in the information age, Buckingham: Open University Press.
Benjamin, R. (2019), Race After Technology, Cambridge: Polity.
Berryhill, J., Bourgery, T. and Hanson, A. (2018), “Blockchains unchained: Blockchain technology and its use in the public sector.”
Bovens, M. and Zouridis, S. (2002), “From Street-Level to System-Level Bureaucracies,” Public Administration Review, 62(2), 174–184.
Boyd, L. H. Jr, Hylton, J. H. and Price, S. V. (1978), “Computers in social work practice,” Social Work, 23(5), 368–371.
Braithwaite, V. (2020), “Beyond the bubble that is Robodebt: How governments that lose integrity threaten democracy,” Australian Journal of Social Issues, 55(3), 242–259.
Bright, J., Margetts, H., Hale, S. A. and Yasseri, T. (2014), The use of social media for research and analysis: A feasibility study, Department for Work and Pensions.
Brooker, P., Vines, J., Barnett, J., Feltwell, T. and Lawson, S. (2016), “Everyday socio-political talk in Twitter timelines: a longitudinal approach to social media analytics,” in 2016 International Conference on Social Media & Society (#SMSociety).
Brown, D. L. (2021), “Digital government: ideology and new forms of power,” Deakin.
Callon, M. (1986), “The Sociology of an Actor-Network,” in Mapping the Dynamics of Science and Technology, eds. Callon, M., Law, J. and Rip, A., London: Macmillan, pp. 19–34.
Callon, M. (2001), “Actor network theory,” in International encyclopedia of the social & behavioral sciences, pp. 62–66.
Carney, T. (2019), “Robo-debt illegality,” Alternative Law Journal, 44(1), 4–10.
Carr, H. and Cowan, D. (2016), “What’s the Use of a Hashtag? A Case Study,” Journal of Law and Society, 43(3), 416–443.
Castells, M. (1996), The Rise of the Network Society, Oxford: Blackwell.
Castells, M. (2013), Communication power, Oxford: OUP.
Cheng, X., Shuai, C., Liu, J., Wang, J., Liu, Y., Li, W. and Shuai, J. (2018), “Topic modelling of ecology, environment and poverty nexus: An integrated framework,” Agriculture, Ecosystems & Environment, 267, 1–14.
Cioffi-Revilla, C. (2014), Introduction to computational social science, London and Heidelberg: Springer.
Cook, K., Given, L., Keam, G. and Young, L. (2019), “Technological opportunities for procedural justice in welfare administration,” Critical Social Policy, 0261018319860498.
Dai, X. (2018), “Toward a reputation state: The social credit system project of China,” Available at SSRN 3193577.
Davis, J. L. (2020), How artifacts afford, MIT Press.
Davis, J. L. and Chouinard, J. B. (2016), “Theorizing affordances,” Bulletin of Science, Technology and Society, 36(4), 241–248.
Department of Social Security (1968), Annual Report 1967-68, Canberra: Australian Government Publishing Service.
Desiere, S. and Struyven, L. (2020), “Using artificial intelligence to classify jobseekers: the accuracy-equity trade-off,” Journal of Social Policy.
Devereaux, Z. P., Cukier, W., Ryan, P. M. and Thomlinson, N. R. (2009), “Using the Issue Crawler to map gun control issue-networks,” in APSA 2009 Toronto Meeting Paper.
Du Gay, P. (2000), In praise of bureaucracy: Weber-organization-ethics, Sage.
Dunleavy, P., Margetts, H., Bastow, S. and Tinkler, J. (2006), Digital era governance, Oxford: Oxford University Press.
Eubanks, V. (2018), Automating inequality, New York: St. Martin’s Press.
Evenstad, L. (2020), “Universal Credit still struggles with digital issues, says NAO,” ComputerWeekly.com.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. and Srikumar, M. (2020), “Principled Artificial Intelligence.”
Fuchs, C. (2007), Internet and society, Routledge.
Gillingham, P. (2017), “Predictive risk modelling to prevent child maltreatment,” Journal of Public Child Welfare, 11(2), 150–165.
Goldfinch, S. (2007), “Pessimism, computer failure, and information systems development in the public sector,” Public Administration Review, 67(5), 917–929.
Goyal, N. and Howlett, M. (2021), “‘Measuring the Mix’ of Policy Responses to COVID-19: Comparative Policy Analysis Using Topic Modelling,” Journal of Comparative Policy Analysis: Research and Practice, 23(2), 250–261.
Graham, S. and Wood, D. (2003), “Digitizing surveillance: categorization, space, inequality,” Critical Social Policy, 23(2), 227–248.
Hannah-Moffat, K. (2019), “Algorithmic risk governance,” Theoretical Criminology, 23(4), 453–470.
Harding, A. (2017), New frontiers in microsimulation modelling, Routledge.
Hartmann, K. and Wenzelburger, G. (2021), “Uncertainty, risk and the use of algorithms in policy decisions: a case study on criminal justice in the USA,” Policy Sciences, 1–19.
Heidenreich, T., Lind, F., Eberl, J.-M. and Boomgaarden, H. G. (2019), “Media framing dynamics of the ‘European refugee crisis’: A comparative topic modelling approach,” Journal of Refugee Studies, 32(Special Issue 1), i172–i182.
Henman, P. (1995), “The Role of Computers in Texturing the Micro-Social Environment,” Australian and New Zealand Journal of Sociology, 31(1), 49–63.
Henman, P. (1996), “Does Computerisation Save Governments Money?,” Information Infrastructure and Policy, 5, 235–251.
Henman, P. (1997), “Computer Technology - a political player in social policy processes,” Journal of Social Policy, 26(3), 323–340.
Henman, P. (2005), “E-government, targeting and data profiling,” Journal of E-government, 2(1), 79–98.
Henman, P. (2010), Governing Electronically, Basingstoke: Palgrave.
Henman, P. (2011), “Conditional citizenship? Electronic networks and the new conditionality in public policy,” Policy & Internet, 3(3), 1–18.
Henman, P. (2013), “Governmentalities of Gov 2.0,” Information, Communication & Society, 16, 1397–1418.
Henman, P. (2019), “Of algorithms, Apps and advice: digital social policy and service delivery,” Journal of Asian Public Policy, 12(1), 71–89.
Henman, P. (2020), “Governing by algorithms and algorithmic governmentality,” The Algorithmic Society: Technology, Power, and Knowledge, 2.
Henman, P. (2021), “Administrative justice in a digital world: Challenges and solutions,” in The Oxford Handbook of Administrative Justice, eds. Hertogh, M., Kirkham, R., Thomas, R. and Tomlinson, J., Oxford: OUP.
Henman, P., Ackland, R., Graham, T. and Ionas, A. (2014), “Social policy on the web: The online institutional structure of social policy domains in the UK,” in 14th European Conference on eGovernment, Spiru Haret University, Faculty of Legal and Administrative Sciences, Brasov ….
Henman, P. and Foster, M. (forthcoming), “Networked Social Services: Mapping the National Disability Insurance Scheme (NDIS) online network in Queensland.”
Henman, P. and Graham, T. (2018), “Webportal vs Google for finding government information on the web,” Information Polity, 23(4), 361–378.
Henman, P. and Marston, G. (2008), “The Social Division of Welfare Surveillance,” Journal of Social Policy, 37(2), 187–205.
Herd, P. and Moynihan, D. P. (2019), Administrative burden: Policymaking by other means, Russell Sage Foundation.
Holbrook, T. (1986), “Computer Technology - 1984 and beyond,” Journal of Sociology & Social Welfare, 13, 98.
Høybye-Mortensen, M. (2013), “Decision-making tools and their influence on caseworkers’ room for discretion,” The British Journal of Social Work, 45(2), 600–615.
Ihde, D. (2012), Technics and praxis (Vol. 24), Springer Science & Business Media.
Iliadis, A. and Russo, F. (2016), “Critical data studies: An introduction,” Big Data & Society, 3(2), 2053951716674238.
Ince, J., Rojas, F. and Davis, C. A. (2017), “The social media response to Black Lives Matter: How Twitter users interact with Black Lives Matter through hashtag use,” Ethnic and Racial Studies, 40(11), 1814–1830.
Karger, H. J. (1986), “The de-skilling of social workers: An examination of the impact of the industrial model of production on the delivery of social services,” Journal of Sociology & Social Welfare, 13, 115.
Kehl, D. L. and Kessler, S. A. (2017), “Algorithms in the criminal justice system: Assessing the use of risk assessments in sentencing.”
Kelly, A. (2021), “A tale of two algorithms: The appeal and repeal of calculated grades systems in England and Ireland in 2020,” British Educational Research Journal.
Kim, E., Lee, B. and Menon, N. M. (2009), “Social welfare implications of the digital divide,” Government Information Quarterly, 26(2), 377–386.
Kitchin, R. and Lauriault, T. (2014), “Towards critical data studies: Charting and unpacking data assemblages and their work.”
Latour, B. (1992), “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts,” in Shaping Technology/Building Society: Studies in Sociotechnical Change, eds. Bijker, W. E. and Law, J., Cambridge, MA: MIT Press, pp. 225–258.
Latour, B. (2005), Reassembling the Social, New York: Oxford University Press.
Law, J. (ed.) (1991), A Sociology of Monsters, London: Routledge.
Law, J. (1992), “Notes on the Theory of the Actor-Network,” Systems Practice, 5(4), 379–393.
Lingham, R. and Law, M. (1989), “Using computers for better administration of social services departments,” Computers in Human Services, 5(1-2), 117–132.
Lipsky, M. (1984), “Bureaucratic Disentitlement in Social Welfare Programs,” Social Service Review, 58, 3–27.
Ludlow, K., Pope, C., Bridges, J., Westbrook, J. and Braithwaite, J. (2019), “Policy delusions and dutiful daughters: imagined versus real care integration for older people,” in Organizational Behaviour in Health Care Conference.
Lyon, D. (2006), Theorizing surveillance, Routledge.
Majumder, S., Mondal, T. and Deen, M. J. (2017), “Wearable sensors for remote health monitoring,” Sensors, 17(1), 130.
Mann, M. (2020), “Technological Politics of Automated Welfare Surveillance: Social (and Data) Justice through Critical Qualitative Inquiry,” Global Perspectives, 1(1).
Margetts, H. Z. (2011), “Experiments for public management research,” Public Management Review, 13(2), 189–208.
Mateescu, A. (2021), “Electronic Visit Verification: The Weight of Surveillance and the Fracturing of Care,” Data & Society.
Matthewman, S. (2011), Technology and social theory, Macmillan International Higher Education.
Mayer-Schönberger, V. and Cukier, K. (2013), Big data: A revolution that will transform how we live, work, and think, Houghton Mifflin Harcourt.
McNutt, K. and Pal, L. A. (2011), “‘Modernizing government’: Mapping global public policy networks,” Governance, 24(3), 439–467.
Meijer, A. (2007), “Why don’t they listen to us?,” Information Polity, 12(4), 233–242.
Ministry of Pensions and National Insurance (1960), 1959 Report, London: H.M. Stationery Office.
Morison, J. (2016), “Algorithmic Governmentality,” Society for Computers and Law, 15.
Notley, T. and Foth, M. (2008), “Extending Australia’s digital divide policy: an examination of the value of social inclusion and social capital policy frameworks,” Australian Social Policy, 7, 87–110.
O’Brien, M. (2020), “Social Investment in Aotearoa/New Zealand: Meaning and Implications,” Social Sciences, 9(7), 111.
OECD (2016), Digital Government Strategies for Transforming Public Services in the Welfare Areas, Paris: OECD.
Pasquale, F. (2015), The black box society, Cambridge, MA: Harvard University Press.
Pollitt, C. (2012), New perspectives on public services, Oxford University Press.
Prainsack, B. (2020), “The political economy of digital data,” Policy Studies, 41(5), 439–446.
Rouvroy, A. (2011), “Technology, virtuality and utopia,” in Law, Human Agency and Autonomic Computing, Routledge, pp. 135–156.
Rouvroy, A. and Stiegler, B. (2016), “The digital regime of truth,” La Deleuziana: Online Journal of Philosophy, 3, 6–29.
Ruppert, E., Isin, E. and Bigo, D. (2017), “Data politics,” Big Data & Society, 4(2), 2053951717717749.
Sandvig, C., Hamilton, K., Karahalios, K. and Langbort, C. (2016), “Automation, algorithms, and politics: when the algorithm itself is a racist,” International Journal of Communication, 10, 19.
Schroeder, R. (2018), Social theory after the internet, UCL Press.
Simonofski, A., Fink, J. and Burnay, C. (2021), “Supporting policy-making with social media and e-participation platforms data: A policy analytics framework,” Government Information Quarterly, 101590.
Slobogin, C. (2008), “Government data mining and the fourth amendment,” The University of Chicago Law Review, 75(1), 317–341.
Snellen, I. and Wyatt, S. (1992), “Blurred partitions but thicker walls,” Computer Supported Cooperative Work (CSCW), 1(4), 277–293.
Taulli, T. and Oni, M. (2019), Artificial intelligence basics, Springer.
Vaithianathan, R., Putnam-Hornstein, E., Jiang, N., Nand, P. and Maloney, T. (2017), “Developing predictive models to support child maltreatment hotline screening decisions: Allegheny County methodology and implementation,” Center for Social Data Analytics.
Vandemeulebroucke, T., de Casterlé, B. D. and Gastmans, C. (2018), “How do older adults experience and perceive socially assistive robots in aged care: a systematic review of qualitative evidence,” Aging & Mental Health, 22(2), 149–167.
Verbeek, P.-P. (2005), What Things Do, Pennsylvania: The Pennsylvania State University Press.
Washington, A. L. (2018), “How to Argue with an Algorithm: Lessons from the COMPAS-ProPublica Debate,” Colorado Technology Law Journal, 17, 131.
Winner, L. (1980), “Do artifacts have politics?,” Daedalus, 121–136.
Wise, S. and Craglia, M. (2007), GIS and Evidence-Based Policy Making, CRC Press.
Zouridis, S., Van Eck, M. and Bovens, M. (2020), “Automated discretion,” in Discretion and the Quest for Controlled Freedom, Springer, pp. 313–329.
Zuboff, S. (2019), The age of surveillance capitalism, Profile Books.