ADM Institutionalised: Public Sector Governance

ADM Nordic perspectives: 3rd Workshop


Over the last few years, we’ve seen extensive piloting of ADM and predictive systems in the public sector, and a related shift in notions of how to govern these new initiatives. For the Nordics in particular, this appears to be a formative period with regard to how ADM will be institutionalised, operationalised and governed. Drawing on two previous workshops, we therefore address institutional perspectives and normative responses to ADM in the public sector at the coming meeting. We are interested in which aspects of ADM in public agencies are moving from piloting to larger-scale implementation. For example, what core ideas or values become embodied in this type of institutionalisation? From an empirical perspective, what values and insights do contextualised studies of the situated uses of ADM bring to inform governance? From a governance perspective – ranging from ethics guidelines to administrative law and the European proposal for an AI Act – what implications, needs, tensions and conflicts can be traced?

We ask these questions because building bridges between governance and the practical realities of people implicated by ADM in organisational and everyday life requires taking possible tensions and problems into account. Think of informed consent and transparency, for instance, in light of the practical needs and experiences of citizens in relation to ADM. The overall purpose of this workshop is to deepen both empirical and theoretical insights into the institutionalisation of ADM systems in the Nordic region and to query the notions that govern them.

The rationale for asking the following questions, we argue, is that detailed knowledge is essential if we want to steer ADM onto a path that sustains the high level of trust that citizens in the Nordic countries have in their governing bodies.


  • What imaginaries are part of the institutionalisation of ADM, and how are these played out in different domains, such as healthcare, the education sector or welfare?
  • What core notions of ADM or AI are solidified or juridified in guidelines, institutional practices or law?
  • How are different levels interacting or clashing, for example the European, the national, the regional and the local?
  • How are citizens involved, or not, in the institutionalisation of ADM practices?
  • What notions or practices of ADM and AI move beyond piloting to become organisationally, regionally or nationally implemented?
  • How do ADM and its governance relate to a spatial dimension, for example in public spaces?
  • What happens to the role of handling officers, case workers and other professionals as their decision-making increasingly depends on automated recommendations and computational predictions? How do they understand, work with or mitigate this relationship?
  • How do traditional regulations on autonomy and involvement for the human/patient/consumer translate to the use of ADM and AI?
  • What does the institutionalisation of ADM mean in terms of private/public relationships? Are they becoming more complex and opaque for the everyday citizen?


9.45-10.15

Arrival and Coffee. Everyone mingles, in person and on Zoom!
10.15-10.30
Welcome and opening remarks: Framing and Reflecting. Presented by Stefan Larsson, Stine Lomborg and Minna Ruckenstein.

10.30-12.30 Session 1: Governance. Chaired by Stefan Larsson.
12.30-13.45
Lunch. Lunch will be provided at Moroten och Piskan (LTH).

13.45-16.00 Session 2: AI/ADM Discourses. Chaired by Stine Lomborg.
18.00- wherever the night takes you: Dinner. 
Place: Mat & Destillat, Kyrkogatan 17, Lund

Session 1

Paper 1: Conflict lines and politicisation dynamics in the Special Committee on Artificial Intelligence in a Digital Age – between consensus and dissensus

Frans af Malmborg

Abstract

There has been a broad international proliferation of artificial intelligence (AI) soft policy issued by international organisations, governments, and non-governmental organisations. Initial reports state that such AI policies converge around themes such as AI ethics and trustworthiness. AI is often framed in techno-optimistic terms, claiming that its disruptive effects will contribute to ecological sustainability, economic growth, social equality and administrative efficiency. However, AI systems are highly complex general-purpose technologies with cross-cutting challenges and opportunities for society. Within traditional policy areas such as taxes or social welfare, lines are often already drawn in the sand based on long-standing ideological differences between parties. This article examines the dynamics of potential political conflict lines around the novel AI Act. Looking at the first ever example of a law specifically on AI, the article probes the AIDA committee of the European Parliament, which is currently processing the European Commission’s regulatory proposal, itself the result of the Commission’s AI policy process 2018-2021. Drawing on cleavage theory, the article utilizes semi-structured interviews, along with meeting minutes and video recordings of the committee, to investigate how politicisation and political issues around AI are formed, upheld and sustained.

Paper 2: Official accountability and automated decision-making – Legislative
approach in Finland

Hanne Hirvonen

Keywords: automated decision-making, algorithmic decision-making, administrative decision,
official accountability, administrative law, law and digitalization.

Abstract

Calls for accountability as a legal control mechanism over the use of automated decision-making (ADM) in the public sector can be seen both in a growing body of academic literature and in different types of policy papers. However, ADM is a rather different way to exercise public power compared to decision-making done by individual officers. Consequently, it is still quite unclear what official accountability actually means, legally, in the context of ADM. In my presentation, I first describe the ongoing Finnish legislative process whose aim is to regulate, among other things, official accountability for automated decisions. Then, I evaluate this process in the light of the problem-finding framework that Hin-Yan Liu and Matthijs Maas have developed to ground long-term governance strategies for AI. Even though there is a sense of urgency, it is important to discover and define the problems thoroughly. Yet it seems that the legislator is currently acting on the level of “problem-solving”, which does not necessarily provide the most long-lasting regulatory solutions.

Paper 3: Patient Rights in the Light of AI: Reconsidering transparency as a situated principle in public healthcare

Charlotte Högberg and Stefan Larsson

Abstract

The automation, scale and data dependency of AI-driven decision-making and decision-support in healthcare call for a reassessment of principal ethical and legal norms of transparency in the light of these novel methodologies. Through an analysis of principal and normative legal frameworks of patients’ rights relating to transparency and explainability – e.g., the right to information, autonomy and privacy – within Sweden and the EU, we outline the main challenges in the implementation of AI in public healthcare. We argue that transparency needs to be considered as situated within the information practices of healthcare. Additionally, contextual transparency and explainability of AI systems and methodologies are necessary to adhere to basic principles of the normative and legal frameworks of Swedish healthcare and to assess whether the best possible care is given to those most in need. Thus, we argue that there is an interdependency between healthcare quality and AI transparency.

Paper 4: Calculability or Predictability? The Status of Legal Certainty in
Automated Decision-Making in Welfare Services

Vanja Carlsson

Abstract

In the context of automated decision-making (ADM) in the public sector, the rule of law and its inherent principle of legal certainty are highly debated concepts, covering desirable values of equal treatment, transparency, and impartiality. However, scholars disagree on whether ADM is beneficial for the rule of law and legally certain procedures. The debate points to an ambiguity embedded in the very substantive meaning of the concepts of the rule of law and legal certainty. This article analyses how the legal certainty principle is interpreted and promoted in the practical implementation of ADM in welfare services, and how legal certainty is negotiated in relation to other public principles, such as efficiency. The empirical case is the Swedish Public Employment Service, which has implemented a statistical ADM tool for decisions on support to job seekers. The results show that the promotion of ADM indicates a shift in the understanding of legal certainty.

Session 2

Paper 1: Things of AI Ethics: Algorithm and AI Register in Amsterdam and Helsinki

Sonja Trifuljesko

Abstract

In September 2020, the Algorithm Register and the AI Register were launched in the Cities of Amsterdam and Helsinki respectively. The Register, a digital tool aiming to document the specificities of algorithmic applications used within an organization, was generally positively received, but it also faced some criticism. Both proponents and opponents of the Register, however, approach it merely as an inanimate object. Inspired by Bruno Latour’s call for Dingpolitik (2005), in this paper I attempt to turn the Register into a lively “thing”. Drawing on ethnographic material collected in 2019 and 2020 in the Finnish Capital Region and online, I ask: whose concerns do the Algorithm and AI Registers gather? What are those concerns about? And how are those concerns represented through the Register? I argue that looking closely at the “things of AI ethics” opens up novel ways to explore normative responses to ADM in the public sector.

Paper 2: Algorithmic Implicature

Sille Obelitz Søe

Abstract

In this talk I will outline the idea behind the concept of algorithmic implicature – currently a metaphor for trying to understand that which is present without being explicit in the automated domain, i.e. implicit attitudes, intentions, assumptions, etc. The idea is to develop algorithmic implicature into a full-fledged concept based on investigations of absence and its relation to meaning, representation, and information, as well as investigations of absence-based inference. The focus on absence enables us to address ‘that which is present without being present’ in algorithms in ways that move beyond current ethical concerns regarding discrimination, bias, and unfairness. The concept of absence has received plenty of attention within various strands of philosophy and is not in itself unacknowledged. Absence is absence. Its reality lies in its non-existence. It is neither an entity, an object, a state of affairs, nor a negative fact (Kukso, 2006). We have trouble defining it, yet humans are perfectly able to function and communicate even in the absence of data, information, or knowledge. Further, we are capable of drawing absence-based inferences (Kallestrup & Pedersen, 2013) – i.e. we can reason in counterfactual terms, inferring ~p from the absence of evidence that p, given that we are receiving reliable information about the domain (Pedersen & Kallestrup, 2013; Sober, 2009). However, the modelling of human decision-making for purposes of automation tends to focus on the information at hand – the present, the positive input – and not on the information that is missing, the absent. This is the case with some of the newest advancements in AI regarding social intelligence (Bolander, 2019a) and the capability for judgement (Smith, 2019). Hopefully, these investigations of absence in decision-making processes will further our understanding of artificial intelligence and be especially beneficial in contexts of automated decision-making.

Paper 3: AI in Future Governance: Hype and Irony

Santeri Räisänen

Abstract

In public governance, Artificial Intelligence has been made into a cultural icon: not an artefact, but rather a narrative construct that marshals allies and resources, often by way of excessive and fantastic expectations. Innovation scholars have conceptualised this as hype, a mismatch between expectations and reality. Hype is generally understood either as a macro-sociological predictor (e.g. the Gartner Hype Cycle) or as a technology competence issue, one to be overcome through educational efforts. Following the actors of a governmental artificial intelligence programme reveals a different reality, though. What is often considered hype, i.e. an issue of competence, can rather be understood as a collective, ironic performance – a performance which assembles actors and centres them in sociotechnical projects. In the Finnish Artificial Intelligence Programme, rather than identifying any empirical phenomenon, the concept of hype is taken up as a discursive resource to demarcate the expertise of technologists over non-technologists. The non-technologists, though, approach AI primarily as a joke, implying that the concept of hype is not all that it is hyped up to be.

Paper 4: Economization of data relations

Tuukka Lehtiniemi

Abstract

A recent bout of regulatory proposals from the European Commission outlines “the EU as a role model for a society empowered by data”. Most recently, the Data Act stipulates harmonized standards to enable fluid data flows. Various technology initiatives also propagate ideas about data flows that could be straight from the EC playbook. One example is Solid, a technology project based on Tim Berners-Lee’s vision of the future web. Regulatory proposals and technology projects aim to fix the digital environment, but I will present them as examples of something else: the economization of data relations. This analytical frame of economization (Çalışkan & Callon, 2009) allows viewing them as projects to make data relations more economic, based on naturalized assumptions about what is economic (Elder-Vass, 2016). While the economization of data relations is likely unavoidable, I will suggest that broader notions of the economy could form a basis for more promising fixes of the digital environment.


09.00-09.30 Arrival and Coffee. Quick catch-up over coffee before it begins again.
09.30-11.45 Session 3: Human-ADM Relationships. Chaired by Stefan Larsson.
11.45-13.00 Lunch. Lunch will be provided at Moroten och Piskan (LTH).
13.00-14.30 Session 4: Role of Law. Chaired by Minna Ruckenstein.
14.30-15.30 Closing the workshop: Discussing and Reflecting. Chaired by Stefan Larsson, Stine Lomborg and Minna Ruckenstein.

Session 3

Paper 1: Human-algorithm collaboration supporting “smaller” languages

Silja Vase

Abstract

Human-algorithm collaboration is a phenomenon of growing interest in healthcare as the use of machine-learning technologies in clinical environments becomes more common. I will illustrate the emergence and development of a human-algorithm collaboration configuring the automation of decision-support at Danish hospitals, drawing on a field study of physicians’ use of automatic speech recognition (ASR). Hospitals in Denmark recently started to implement ASR based on machine learning to reduce physicians’ spoken repetitions and improve the efficiency of record-keeping. However, the system relies on the individual physician’s time spent training the algorithm to “make sense” of what is said. This human-algorithm collaboration supports smaller languages, such as the Nordic ones. Despite an expectation that algorithms will become less co-dependent, I demonstrate how the data density of smaller language resources and the work of collecting data challenge such hopes.

Paper 2: Public Attitudes to Digital Health Repositories: Insights from a Cross-sectional Survey

Giovanna Vilaza

Abstract

Driven by the possibility of improving and personalizing medicine, digital data-sharing initiatives within the public healthcare sector have been increasingly adopted, especially in Nordic countries. This presentation will summarize the empirical insights of an extensive online survey (n=1600) conducted with residents in Brazil and Denmark. The published study focused on those contributing personal health data, examining their willingness to share different data types and the prevalence of reasons for concern: essential factors for an ethical search for broader acceptance. Besides reporting on results, the presentation will briefly discuss research implications, including the importance of institutions being more transparent about the goals and beneficiaries of research projects using and re-using data from public repositories. Providing greater autonomy and data control is another promising direction for health data interfaces if they are to be attuned to citizens’ preferences.

Paper 3: Platform’s governance paradox: Exploring algorithmic control through
shadow banning folklore

Laura Savolainen

Abstract

Social media platforms take part in processes of social ordering, setting policies on important matters such as public speech or privacy – sometimes in ways that defy decisions by governments. Thus, although my presentation does not focus on the public sector, I think it can be a useful addition to the discussion on the institutionalization of ADM systems in the Nordics. In the presentation, I study the shifting grounds and emerging logics of (algorithmic) platform governance by exploring social media users’ beliefs and experiences of shadow banning: a contested form of content moderation where the visibility of posts is suppressed without notification. Based on my analysis, I argue that at the heart of platform governance lies a potentially unresolvable paradox: while platforms are increasingly taking the stage as legitimate governors, they are simultaneously and proactively developing their moderation practices in a direction that is clearly not in line with rule-of-law values such as consensuality, relative stability, and fair enforcement.

Paper 4: Disconnecting the communicating body

Sne Scott Hansen

Abstract

Research on digital disconnection has mostly centred on the mental and psychological effects of the over-digitalization of everyday life (Moe & Madsen, 2021), leaving the physical and material aspects of the emerging use of digital technologies in relation to the body insufficiently researched. However, the very act of connecting or disconnecting rests on either physical collaboration or decoupling between a body, a technology, and the data they share. This paper analyses experiences of this relationship in the context of self-tracking, based on empirical fieldwork in Denmark and Sweden. The empirical material consists of qualitative interviews with platform workers engaging in coercive tracking in their jobs as couriers, and self-trackers engaging in voluntary tracking first and foremost for exercise. With this empirical material, I contrast the ways platform workers alternate between being distinctly connected to or disconnected from coercive tracking of their bodily movements at work with the ways leisurely self-trackers voluntarily engage in more permanent relationships with devices in their private lives. I suggest the latter are in a state of being more or less constantly plugged into their devices. The findings suggest that while platform workers engage in coercive tracking, where the body communicates its movements to an algorithm, disconnection is a common and frequent practice for these workers in their smartphone use. Self-trackers, on the other hand, develop more intimate relationships with various devices in order to continuously collect information about the body as a constant source of information. For voluntary self-trackers, therefore, disconnection becomes both a less visible and less valuable act. This indicates that disconnection relates to the context of tracking and the proximity between the device, the body, and the digital tracks it leaves behind.

Session 4

Paper 1: ADM systems, systematic errors and the rise of algorithmic injustice

Charlotta Kronblad

Abstract

Algorithmic decision-making (ADM) systems are becoming increasingly common within the public sector. ADM efforts are often applauded and appreciated, while negative consequences are poorly understood. In this paper we aim to shed light on such consequences. In particular, we look at situations where public sector ADM has failed and ask: when algorithms do wrong, will public institutions put things right? By exploring a case where the school administration in Gothenburg implemented an ADM system for school placements, we show that ADM systems come with the risk of new types of systematic errors. These errors stem from the use, and misuse, of ADM processes and are often hidden in code. Using an activist research approach – suing the administration – we conclude that neither the administration nor the administrative court understands or accounts for these errors, resulting in a lack of responsibility for correcting them. This leads to algorithmic injustice and decreases trust in public institutions, while challenging social, administrative, and legal justice.

Paper 2: Public Sector AI Governance: Surveying the Discrimination Awareness in Swedish Public AI Development

Stefan Larsson and Charlotte Högberg

Abstract

The aim of this article is to increase the conceptual and empirical understanding of the governance of public sector AI and ADM development, primarily regarding anti-discrimination. This includes the step from high-level principled notions of transparency and fairness in AI and ADM development down to specific national-level authorities. Under this aim, we use Sweden as a case and empirically study, i) to what extent national Swedish authorities are developing or utilizing ADM or AI systems, and, if they are, ii) to what extent and in what respects they take anti-discrimination into account in the form in which it is regulated in Sweden. Lastly, we seek to contribute to the space between the principled accounts of fairness and transparency and the regulated notions of anti-discrimination, in iii) how the principled notions of AI governance can be related to regulation on anti-discrimination when it comes to public sector AI development. For the descriptive analysis, we utilize data from a survey that the Swedish Equality Ombudsman (Diskrimineringsombudsmannen, DO) conducted in late 2021 and early 2022, covering 34 national public authorities.

Paper 3: Imaginaries of a better administration: renegotiating the relationship
between digital public power and citizens

Terhi Esko and Riikka Koulu

Abstract

The article investigates how the relationship between public power and citizens is renegotiated when developing regulation on the use of automated decision-making. Through a case study, and using the concept of sociotechnical imaginaries, we address a law reform process at its initial stages. The article argues that current efforts to regulate the use of automated decision-making in public authorities do not acknowledge the diversity of citizens in decision-making regarding them. Rather, the “average citizen” is created as a proxy for public authorities, failing to account for the diversity in people’s skills, competences, and position as subjects of law. Four imaginaries of future administration are presented: understandable administration, self-monitoring administration, adaptive administration, and responsible administration. These imaginaries accommodate expectations of what is considered good administration and of the role of citizens and technology in it.


Call for papers

If you wish to present your work, please send a title and abstract (150 words).

Deadline: 16 March.

To Laetitia (Tish) Tanqueray: laetitia.tanqueray@lth.lu.se 

For those who just want to attend – let us know.


How

While we are pursuing a physical event, where we meet up in Lund to present and discuss, there will also be an opportunity to join online. Given the experience from the workshop in Copenhagen, we will have at most four sessions containing around 3-4 presentations each. For network members, travel and accommodation can be covered.

The workshop will have room for a maximum of 30 attendees, and presenters will be prioritised.
