Everyday ADM: Maintaining Autonomy, Equity and Pursuits of Welfare

ADM Nordic perspectives: 2nd Workshop, Copenhagen 27-28 January 2022.

Over the last few years, critical social science research has established that processes of datafication, data harvesting and digital tracking, in particular, pose a general societal challenge that risks undermining Nordic values of autonomy and equity and the overall welfare of people. Researchers have flagged datafication as a specific concern for the public sector (e.g. Dencik & Kaun, 2020), especially in relation to automated decision-making (ADM) systems and other forms of data-driven optimization. Despite a burgeoning literature on these concerns, and the ethical guidelines and regulatory initiatives that try to respond to them, we still know little about the experiential dynamics at play in concrete contexts of ADM.

The purpose of this second workshop, arranged by the ADM Nordic network, is to deepen empirical insights into ADM systems in the Nordic region and query their impacts on the lives of citizens. We aim to address everyday responses to ADM by focusing on how people from all walks of life promote, respond to, act with and seek to circumvent or avoid ADM systems: What are the imagined, perceived and felt impacts of automated decision-making? What are the commonalities and differences between everyday responses to ADM across personal and organizational domains? What characterizes Nordic employees’ and citizens’ values, capacity for and practices of agency in relation to ADM systems? And what variations and tensions in responses to ADM can be discerned across the Nordic countries?

The rationale for asking such questions is that detailed knowledge is essential if we want to steer ADM onto a path that sustains the high level of trust that citizens in the Nordic countries have in their governing bodies. Moreover, for people to live well with ADM systems in the region, these systems need to maintain societal aims of autonomy and equity. We contend that, while agency and resistance may be limited, they are nonetheless documented in people’s responses to ADM systems and should inform a general pursuit of citizen-friendly ADM (e.g., Lomborg, Thylstrup et al., 2018; Ruckenstein & Granroth, 2019). It is through empirical work on the relationship between specific technological services and infrastructures and people’s expectations, responses and actions that systemic powers and biases can be untangled and adequately addressed.

This workshop builds on the discussions from the first ADM Nordic network workshop, which outlined a focus on rehumanizing ADM (see report: LINK). Through a tour de force of different cases of ADM in the Nordics, we argued for the need to establish “the human as a critical and creative agent in human-machine relationships that are emerging in the wake of recent ADM technologies and related discourses” and further suggested focusing our analytical lens on what ADM does, rather than what ADM is, to offer a more situated, dynamic and processual view of ADM systems. Hence, we invite presentations that may fuel such an agenda.

Preliminary workshop programme

Practical information

The workshop will be held in a hybrid format.

Location: University of Copenhagen, South Campus, room TBC

Zoom: link can be provided upon request to the organizers.

For presenters

Please prepare a presentation lasting 15-20 minutes. Presentations will be followed by discussions in small groups (if possible) and a joint Q&A. If you have a full paper to share, please send it to Stine Lomborg: slomborg@hum.ku.dk. We will then circulate it to the workshop participants in advance of the workshop.


10:30  Arrival and coffee

10:30-12:30  Welcome and session 1: Setting the scene of ADM research: From concepts and cases to experiencing everyday ADM

Chair: Stine Lomborg

Conceptual Boundaries: Defining and Undefining Automated Decision-Making

Stine Lomborg (University of Copenhagen), Anne Kaun (Södertörn University) & Sne Scott Hansen (University of Copenhagen)

In recent years, the notion of automated decision-making has experienced an upswing in social science and humanities research. In part, the term promises a more focused and nuanced way to approach the latest technological developments in automation, including artificial intelligence and machine learning, and it is now being broadly used and explored across disciplines. This article traces the emergence and evolution of automated decision-making research across fields, identifying central concerns and methods while outlining a stable baseline for future research. Based on a systematic mapping of publications and a network analysis of central publications, we outline the contours of ADM as an area of research and an emerging empirical phenomenon, and we suggest trajectories for continued empirical and conceptual work.

Algorithms at work in Denmark

Rikke Frank Jørgensen (Danish Institute of Human Rights)

In Denmark, the public administration relies heavily on processing vast quantities of data about individuals and increasingly uses predictive analytics to identify specific areas of intervention, such as fraud or vulnerability, as part of its decision-making processes. The logic of data analytics and prediction goes hand in hand with digitalisation policies focused on public sector efficiency, but risks undermining citizens’ rights to transparency, privacy, and non-discrimination. Moreover, it shifts the power balance between citizens and public authorities by making citizens ever more transparent and decision-making processes ever more difficult to challenge.

In the talk, I will present findings from a recent study on automated decision-making in the public sector in Denmark. I will address questions such as: In which ways are citizens’ rights to privacy, non-discrimination, and transparency challenged by automated decision-making? How can case workers mitigate those risks? And do we need new regulation to ensure that citizens’ agency and trust are sustained in the digital welfare state?

Autonomy as a lens for studying human agencies in relation to algorithms

Laura Savolainen and Minna Ruckenstein (University of Helsinki)

In the Nordic countries, autonomy is a core value that defines the quality of everyday life. Not surprisingly, then, autonomy is also at stake when people evaluate human agencies in relation to algorithmic technologies. In terms of agentic qualities, the current scholarly discussion on algorithmic systems tends to emphasize control on the one hand and resistance on the other. This dichotomy is, however, likely to limit our understanding of human agency in relation to algorithmic systems. In order to offer ways forward in the ongoing debate, we develop autonomy as an analytic lens by suggesting two interrelated horizons of agentic engagement with algorithmic developments: the intimate and the instrumental. We demonstrate how a focus on autonomy promotes an open-ended and context-sensitive exploration of user agency, while also offering analytic clarity and differentiation that stays committed to a critical stance towards human-algorithm relations and the power imbalances they suggest.

LUNCH

13:30-15:30  Session 2: Data friction, infrastructural power and agency

Chair: Stefan Larsson

Untangling automated decision-making in breast cancer screening: Breast radiologists’ views on the everyday use of artificial intelligence

Charlotte Högberg, Stefan Larsson and Kristina Lång (Lund University)

We want to understand how professionals in the public health care sector navigate the automation of decision-making and decision support. In the development of AI for mammography screening, the hopes are to increase cancer detection, decrease false positives and ease the screen-reading workload for radiologists. A survey of Swedish breast radiologists shows that a majority are positive towards the use of AI in mammography screening, but also that there are frictions and uncertainties. Very few think that screen-reading should be fully automated. At the current state of development, most would like to use AI as a replacement for one of the two human readers or as an addition to standard double-reading. While most would, to quite a high degree, trust classifications by AI systems, many are uncertain about the trustworthiness of AI. Most consider that information regarding training data and development, as well as visual explanations of the AI systems, would be helpful in their trust assessment, whereas fewer would be helped by access to the algorithms.

Datafied Schools: The hidden commodification of public primary education in Denmark

Victoria Andelsman Alvarez, Sofie Flensburg and Signe Sophus Lai (University of Copenhagen)

Amid the increasing reliance on digital tools and services in education, this article examines the datafication and commodification of public education in Denmark. We analyse the web and app (iOS and Android) versions of 45 tools and services that are widely used in public primary schools, the market actors harvesting and distributing user data, and the types of data they generate. The analysis finds that the digital tools and services collect significant amounts of user data, use it for functional as well as commercial purposes, and distribute it to a long list of third-party services. In light of these findings, we reflect on how datafied and commodified Danish school life challenges established welfare state ideals surrounding public schooling, raises challenges for schools and teachers alike, and creates new inequalities amongst students.

Friction in data labour – prisoners training artificial intelligence

Minna Ruckenstein and Tuukka Lehtiniemi (University of Helsinki)

In this paper, we use the notion of friction (Tsing 2005) to examine the human data labour that keeps AI-based automation running. We discuss an unconventional case of data labour: Finnish prisoners producing training data for a local artificial intelligence company. At first glance, prison data labour is ‘ghost work’, a now recognized form of low-paid click work. In light of friction, however, we are dealing with local and situational variations of data labour: how high-tech development can be married with humane penal policies and rehabilitative aspirations. We argue that the notion of friction aids in holding together seemingly contradictory value aims and in opening novel ways of exploring processes of automation. By acknowledging what is of value to the different parties involved, we can begin to see alternative paths forward in the study of automation.

Relationality of platform infrastructures

Kirsikka Grön (University of Helsinki)

Platform infrastructures and algorithmic infrastructures have become catchy terms to explain how ADM and other data-driven systems are becoming a crucial part of everyday life. However, these discussions often tend to overestimate infrastructural power and disregard people’s agency to resist it. Drawing on research on living with data and algorithms and on Star and Ruhleder’s (1999) work on infrastructures, we pay attention to the relationality of infrastructures. By analysing group interviews conducted in Hangzhou, China, we show how participants related platform infrastructures’ daily effects to socioeconomic status, occupation, and age, emphasizing their relational nature even in an authoritarian environment. We situate our results as part of people’s globally felt struggles in datafied societies and suggest that further exploration of the relationality of infrastructures in Nordic contexts holds potential for thinking about how to nurture the values of autonomy, equity, and welfare in the algorithmic age.

DINNER?


Arrival and coffee

09:00-11:00  Session 3: Opening black boxes

Chair: Minna Ruckenstein

Auditing Risk Prediction of Long-Term Unemployment 

Cathrine Seidelin (University of Copenhagen)

As more and more governments adopt algorithms to support bureaucratic decision-making processes, it becomes urgent to address issues of responsible use and accountability. We examine a contested public service algorithm used in Danish job placement to assess an individual’s risk of long-term unemployment. The study takes inspiration from cooperative audits and was carried out in dialogue with the Danish unemployment services agency. Our audit investigated the practical implementation of the algorithm. We find (1) a divergence between the formal documentation and the model tuning code, (2) that the algorithmic model relies on subjectivity, namely a variable capturing the individual’s self-assessment of how long it will take before they get a job, (3) that the algorithm uses the variable “origin” to determine its predictions, and (4) that the documentation neglects to consider the implications of using variables indicating personal characteristics when predicting employment outcomes. We discuss the benefits and limitations of cooperative audits in a public sector context, focusing specifically on the importance of collaboration across different public actors when investigating the use of algorithms in the algorithmic society.

Omaolo as an ADM Infrastructure

Sonja Trifuljesko (University of Helsinki)

In this paper, I focus on Omaolo (Eng. MyFeel), the digital channel for social and health services in Finland of which the coronavirus symptom checker is a part. More precisely, I examine how Omaolo was assembled in the first place. To do that, I go back to the period between 2016 and 2018 and zoom in on a national flagship project titled “Self-care and digital value services”, which gave rise to Omaolo. Drawing on 55 posts published on the project’s blog, the Omaolo guidebook, four promotional and two demonstration videos produced at the end of the project, as well as a half-day-long round-up seminar, I pinpoint some of Omaolo’s defining characteristics. In doing so, I aim to identify traits that could have contributed to making Omaolo a well-functioning infrastructure in utterly dysfunctional times and, thus, to foreground certain features of a Nordic ADM system that we might consider worth cherishing.

MAGIC in the Blackbox – An attempt to demystify Neural Networks used for Breast Cancer Diagnostics

Julian Rosenberger and Dorthe Brogård Kristensen (University of Southern Denmark)

Recent progress in machine learning (ML) has resulted in a new boom of artificial intelligence (AI) applications that claim to offer enormous advantages to a diversity of fields (Gunning et al. 2019). In healthcare, several AI techniques are already in use, ranging from classical support vector machines and neural networks to state-of-the-art deep learning models. Although reservations remain, there is consensus that AI can assist human doctors in making better clinical decisions and perhaps even replace physicians in certain healthcare areas (e.g. radiology) (Jiang et al. 2017). However, many of these models cannot explain their autonomous decisions and therefore lack transparency and trustworthiness. Whereas for certain AI applications such explanations may not even be needed, they are crucial for medicine (Gunning et al. 2019). With the recent progress in deep learning (DL), another line of research has attracted great interest: Explainable Artificial Intelligence (XAI). XAI aims at making decisions made by DL-based models more transparent and therefore more reliable (Arrieta et al. 2020). This paper examines the need for XAI in the context of the MAGIC project, organised by the Department of Radiology at Odense University Hospital, in which a novel DL model for mammography screening, called Transpara, is tested extensively with the goal of replacing one of the two radiologists as reviewer. Drawing on insights from one of the project leads, a radiologist with more than 20 years of work experience, and technicians from the company providing the algorithm, as well as on state-of-the-art XAI methods, this paper aims to demystify the tested software and provide a glimpse into the black box, also for non-computer scientists.

Top-down and bottom-up AI imaginaries in social care

Tuukka Lehtiniemi (University of Helsinki)

This presentation examines an AI tool for risk prediction in social care. The tool identifies individuals at risk and raises notifications to caseworkers. Two imaginaries are readily available to make sense of the case. A solutionist imaginary treats AI as a tool transferable between application fields: if AI enables predictive policing and credit scoring, it can identify children at risk of custody. A reactive imaginary flags AI tools as dubious to begin with: an expanding critical literature has identified problematic practices, arbitrary outcomes and reinforced inequalities. We argue that both solutionist and reactive imaginaries are external to Finnish social care, and as such, both are top-down in nature. They offer rallying cries but fail to provide contextual guidance. Based on the case, we offer an empirically robust way to explore and practice AI imaginaries that are compatible with social care. A bottom-up approach reveals how AI tools can either support or undermine care practices, depending on the arrangements they are embedded in.

12:30-14:30  Session 4: Future pursuits with ADM

Chair: Stine Lomborg

Bubbling up/beyond digital participatory budgeting 

Yu-Shan Tseng (Centre for Consumer Society Research, University of Helsinki), Christoph Becker (Institute for Biodiversity and Ecosystem Dynamics and Institute for Advanced Study, University of Amsterdam) & Ida Roikonen (Centre for Consumer Society Research, University of Helsinki)

In this paper, we seek to uncover how OmaStadi (Our City), a Finnish platform for participatory budgeting, reshapes the political importance of urban issues. We adopt a hybrid methodology of data analysis and ethnography to identify the ways in which urban issues become trending, and with what logic. We develop a novel analytical lens, bubbling, to theorise the process in which the numeric and binary logic of the OmaStadi platform determines the thematic, temporal and geographical enclosures of urban issues. We reveal that trending urban issues not only bubble up at specific times, but are also skewed towards small-scale, non-controversial matters such as sports facilities and trees. This bubbling up falls into a local trap of urban democracy. Going beyond this technically deterministic logic, we argue for the need to consider everyday struggles as a starting point to foreground the democratic possibilities of the OmaStadi platform.

Data Futuring in Chronic Self-Care

Tariq Osman Andersen (University of Copenhagen), Stina Matthiesen (University of Copenhagen) and Jonas Fritsch (IT University of Copenhagen)

The increased uptake of consumer self-tracking devices in chronic care contexts and the opportunities demonstrated with artificial intelligence in healthcare are changing the scope of personal informatics. Emerging studies report on future forecasting of health outcomes, suggesting a novel class of self-care technologies. We conducted an exploratory interview study with five chronic heart patients to explore the implications of short-term predictions of heart arrhythmia. We present seven dimensions of what we term ‘data futuring’ and describe the ways in which patients engage in future-oriented data work by attending to data from their Fitbit device and to user interface mock-ups of mobile app features that show the 1-30 day prognostic risk of upcoming heart arrhythmia.

Guaranteed outcomes or new uncertainties? Behavioural data in life insurance  

Maiju Tanninen (University of Helsinki and Tampere University) 

This paper examines how Finnish life insurers experiment with behavioural data, generated by policyholders’ self-tracking practices. In industry visions, and in some of their critiques, behavioural data are believed to enable more precise risk calculations and a more proactive relation to risk. Through the logic of a ‘feedback loop’, policyholders are ‘nudged’ towards healthier habits; this is supposed to enhance the predictability of risk, creating ‘guaranteed outcomes’ (Zuboff 2019). However, these accounts ignore that the ‘lively’ digital data (Lupton 2016) are very different from the insulated data sets that insurance companies are familiar with; thus, their implementation might not be straightforward. Based on 16 expert interviews, I examine the uncertainties related to the use of behavioural data in insurance. Firstly, I discuss the tension between the need to ensure trustworthy customer relations and the future-orientation of the operations. Secondly, I analyse the insurance professionals’ efforts to create a ‘feedback loop’ and discuss their struggles to form distinct connections between behaviour, data, wellbeing and risk.

Dreaming up the cyborg-citizen-consumer

Santeri Räisänen (University of Helsinki)

Since 2017, the Finnish national artificial intelligence programme AuroraAI has ventured to implement an AI transformation towards a human-centric society in the Finnish public sector. Attempting to subvert the wicked problems weighing down the Nordic welfare state, namely an ageing population and a crisis in public finances, the human-centric society of the Aurora programme is founded on a vision of a new kind of subject: the cyborg-citizen-consumer. By augmenting citizens' service consumption with personalised algorithmic welfare recommendations, the programme directs citizens towards the self-responsible, the economically prudent and the therapeutic in their consumption behaviour. Drawing on design documents and interviews with planners in the programme, I analyse the ethico-political features of the cyborg-citizen-consumer as it is imagined in the making of AuroraAI. The analysis reveals algorithmic recommendation as a site of neoliberal governmentality, and the bourgeois fantasy of future welfare underpinning the sociotechnical programme: that of an inadequately self-responsible people augmented by AI.

14:45-15:15  Closing panel: Cross-cutting themes and ways forward

Panelists: Minna Ruckenstein, Stefan Larsson & Stine Lomborg

Thank you, and see you in person next time in Lund, 6-7 April 2022.