

Algorithmic Accountability: Designing Governance for Responsible Digital Transformations

Alternative title: Algoritmisk Ansvarlighet: Retningslinjer for forsvarlig digital transformasjon

Awarded: NOK 10.0 mill.

AI and (self-learning) algorithms are increasingly used to support, accelerate and even replace human decision-making in various public and private arenas. Algorithms determine decisions in stock-trading and finance, fraud detection, scientific discovery, medical diagnostics and online match-making. Such decisions made by artificial intelligence systems are often implicit and invisible, and they carry both intended and unintended consequences. This increasingly makes them objects of public concern and scrutiny. Against this background, this project offers a business ethics perspective on how social, commercial, and political actors, at both local and global scales, can ensure accountability in algorithmic decision-making processes. Gathering a group of international researchers with expertise in law, internet studies, information systems, and management research, the project will conduct a multi-method and multi-stakeholder investigation to develop a comprehensive framework of the affordances, responsibilities, and outcomes of algorithmic decision-making. To this end, we will first develop a framework for accountable algorithmic decision-making grounded in the literature on legitimacy, participation, and inclusion. Second, we will systematically collect, map, and compare varying notions of algorithmic agency, placing particular emphasis on the co-constitution of algorithmic agency between organizations and their stakeholders. Third, we will develop actionable guidance towards creating accountable algorithmic decision-making processes, based on both explainable programming and comprehensible communication of decision-making rationales and data sources. Finally, as a practical deliverable, we will create a normative model for evaluating accountability in algorithmic decision-making processes, examining to what extent the algorithms are transparent, provide proper dispute channels, and enable public oversight.
In the first year of the project, the research team started developing two empirical projects and one conceptual project. The first project focuses on notions of algorithmic agency. A large-scale online experiment was conducted to identify how the public perceives the use of algorithms for decision-making in organizations. The research team focused on outcomes such as trustworthiness and transparency of decision-making processes involving different teams: humans, AI, or humans and AI together. The team also investigated how the public evaluates the quality of the outcomes of decision-making processes involving these groups. To gain a more nuanced understanding of how the public perceives hybrid teams (i.e., teams consisting of humans and AI), the study portrayed AI in two different roles: as a tool or as a colleague. Preliminary results suggest that the public evaluates the work outcomes of the different teams as equally accurate, that hybrid and human teams are considered the most trustworthy, and that the decision-making process in hybrid teams is perceived as the most transparent. In the next steps of the project, additional studies will investigate the responsibilities and outcomes of algorithmic decision-making as a result of algorithmic agency. The second empirical project further examines the notions of algorithmic agency attributed by the public. The research team is studying how the public perceives incivility expressed towards AI versus towards humans. AI-enabled products and services (e.g., Siri, Alexa) often tolerate incivility (e.g., impoliteness, rudeness, verbal and physical harassment) directed towards them. The results of the online experiment conducted in this project suggest that the public is more tolerant of incivility expressed towards AI than of incivility expressed towards people.
In the next studies in this empirical project, the research team will explore the implications of being uncivil towards AI-enabled products and services. We intend to show that AI's technological affordances might promote the normalization of incivility beyond interactions with AI. The third, conceptual project is dedicated to studying how organizations communicate their use of AI to the general public and what implications different communication strategies have for organizational legitimacy. The research team studies how craft organizations, such as design studios and restaurants, use AI. Craft organizations were chosen because they rely heavily on the creativity, knowledge, imagination, and even personality characteristics of craftsmen. The purpose of the project is to outline how the public evaluates the legitimacy of such organizations when they use AI to supplement or replace the labour of craftsmen.

Building on ongoing international discussions, and continuing the team’s previous work within the SAMANSVAR framework on fair labor on platforms and the gig economy, we address the socioeconomic effects of the growing adoption of artificial intelligence, algorithmic management, and smart automated systems, where productivity gains for organisations are met with concerns about whether the speed of implementation can be matched with the necessary levels of accountability and oversight. Our research shall result in a unifying framework for accountability in algorithmic management, which shall serve as a basis for organizations, regulators, and social communities to take actionable steps towards ensuring accountable algorithmic decision-making processes. Building on the framework, the mapping of algorithmic agency, and the normative evaluation model outlined above, we will finally develop a user-friendly accountability-enhancing tool, the ‘Framework for Algorithmic Accountability’. The framework will be tested in real-world settings of algorithmic decision-making processes and can be utilised by researchers, activists or lay Internet users to challenge algorithmic systems.
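To illustrate what an evaluation model along these lines might look like in practice, the sketch below encodes the three criteria named in the project description (transparency, dispute channels, and public oversight) as a simple scoring checklist. This is a hypothetical illustration only: the class, field names, and scoring are our own assumptions, not an implementation published by the project.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicSystem:
    """Hypothetical description of an algorithmic decision-making system."""
    name: str
    discloses_data_sources: bool   # are data sources documented?
    explains_decisions: bool       # are individual decisions explainable?
    has_dispute_channel: bool      # can affected parties contest a decision?
    allows_external_audit: bool    # is independent/public oversight possible?

def accountability_check(system: AlgorithmicSystem) -> dict:
    """Evaluate the three criteria from the project description:
    transparency, dispute channels, and public oversight."""
    transparent = system.discloses_data_sources and system.explains_decisions
    return {
        "transparent": transparent,
        "dispute_channel": system.has_dispute_channel,
        "public_oversight": system.allows_external_audit,
        # number of criteria satisfied, out of three
        "score": sum([transparent, system.has_dispute_channel,
                      system.allows_external_audit]),
    }

# Example: a loan-scoring system that explains its decisions and offers a
# dispute channel, but does not allow external audits.
loan_model = AlgorithmicSystem("loan-scoring", True, True, True, False)
print(accountability_check(loan_model))  # score: 2 of 3 criteria satisfied
```

A real evaluation model would of course require graded rather than binary judgments, but the sketch shows how the three criteria can be operationalised into a checklist that lay users could apply to a concrete system.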

Publications from Cristin

No publications found


Funding scheme:

TEKNOKONVERGENS-Teknologikonvergens - grensesprengende forskning og radikal innovasjon