AI4Users: Responsible Use of Artificial Intelligence through Design for Accountability and Intelligibility

Alternative title: AI4Users: Ansvarlig bruk av kunstig intelligens gjennom design (KI4Brukere)

Awarded: NOK 16.0 mill.

The AI4Users project takes a human-centered perspective on artificial intelligence (AI). Its starting point is people as AI users: humans are ultimately responsible for AI technologies and must therefore be involved in a non-superficial way to ensure meaningful human control over AI. Intelligibility and accountability can help users understand, appropriately trust, and effectively manage the emerging generation of AI applications. Intelligibility means that AI applications must be intelligible as to their state, the information they possess, and their capabilities and limitations. This is linked to the overall problem of AI explainability: for some AI applications we cannot describe how they actually work, e.g. why a given input produces a particular output. This is known as the «black box» problem, which can impede the involvement of humans in shaping, operating, and monitoring the use of AI in service delivery. Accountability is another key concern and relates to the whole AI life cycle, spanning development, use, and performance monitoring.

The project has reviewed existing research on accountability for AI. This analysis revealed that the most recent research in the field addresses issues similar to those already covered by research carried out in the 1980s (ensuring transparency, training users to understand the limitations of AI, determining legal status and areas of use). This indicates that the same problems persist, but it also shows that there are gaps in the coverage of new questions and challenges.

In the first phase of the project, a prototype for artificial intelligence in a public service was developed and evaluated with 40 citizens representing different age groups. We found that citizens demand transparency and prefer systems that are not fully automated ("humans in the loop").

Over the past year, we researched how public service employees envision working with AI. We conducted interviews and workplace visits to understand their perspectives, and found that they see AI as a valuable partner that can help them perform their tasks more efficiently and proactively.

Throughout the project we have engaged in educational activities about the responsible use of AI in public services. We have designed, implemented, and evaluated a course on Responsible AI (RAI) applied to the context of public welfare services. The course was run over two semesters (in 2022 and 2023) with 49 students in total at NTNU, and the plan is to run it again in 2024. We adopted an experiential learning paradigm and a course design that fosters multidisciplinary and problem-oriented learning. Inspired by design thinking, our course protocol promotes continuous learning by building an arena for the students to reflect on the learning process and grow their awareness of RAI. The course protocol has been published in the paper «Educating about Responsible AI in IS: Designing a course based on Experiential Learning», which has been nominated by the Association for Information Systems for the 2023 international best educational paper award in Information Systems research (decision expected in December 2023).

Further project activities follow the Action Design Research (ADR) approach, an iterative and adaptive research method closely linked to practice. The method emphasizes collaboration and is evaluation-driven.
The project includes a) designing, prototyping, and assessing tools that enable different categories of non-experts to maintain insight into AI applications, b) formalizing the design knowledge generated into actionable design principles, and c) capacity building through collaborations between academia and the public sector, nationally and internationally.
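To make the «black box» problem mentioned above concrete, the minimal sketch below (not taken from the project; the data, model choice, and variable names are illustrative assumptions) shows one common way to provide a degree of intelligibility: approximating a single prediction of an opaque model with a simple, interpretable local surrogate.

    # Minimal illustration (not from AI4Users) of the «black box» problem:
    # an opaque model decides, and a simple local surrogate is fitted afterwards
    # to approximate why it decided. All data and names are synthetic assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))                   # synthetic "case" features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hidden decision rule

    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Explain one case locally: perturb it, query the black box, and fit an
    # interpretable linear model to the answers it gives.
    case = np.zeros(4)
    neighbours = case + rng.normal(scale=0.3, size=(200, 4))
    surrogate = LogisticRegression(max_iter=1000).fit(neighbours, black_box.predict(neighbours))
    print("approximate local feature weights:", surrogate.coef_.round(2))

The surrogate's weights only describe the black box's behaviour near this one case; they do not make the underlying model itself transparent, which is why the project treats intelligibility as a broader design concern rather than a purely technical add-on.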

Infusing public services with AI solutions can contribute to efficiency and effectiveness improvements, but this may come with increased opaqueness. Such opaqueness can limit the involvement of humans in shaping, operating, and monitoring the arrangements that ensure meaningful human control. The responsible use of AI entails ensuring algorithmic intelligibility and accountability. Intelligibility means that the algorithms in use must be intelligible as to their state, the information they possess, and their capabilities and limitations. Accountability means that it is possible to trace and identify responsibility for the results of algorithms. Both are required for using algorithms under human oversight.

The AI4Users project will contribute to the responsible use of AI through the design and assessment of software tools and the formalisation of design principles for algorithmic accountability and intelligibility. The project takes a human-centred perspective, addressing the needs of different groups implicated in AI-infused public services: citizens, case handlers at the operational level, middle managers, and policy makers. The novelty of AI4Users is that it specifically targets non-experts, extending the reach of research beyond AI experts and data scientists. The use cases employed by the project will address different oversight scenarios, including human-in-the-loop, human-on-the-loop, and human-in-command.

The User Organisation will be NAV, and the project will be associated with NAV's AI lab. The project research will be linked to NAV's ongoing AI work and specific AI solutions under deployment. The project will seek access to case handlers in local NAV offices and to NAV's permanent local and national user committees (NAV Brukermedvirkning lokalt & nasjonalt). The overall aim is to advance public infrastructures and contribute to introducing human-friendly and trustworthy artificial intelligence in practice.
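As a purely illustrative sketch of the human-in-the-loop oversight scenario mentioned above (the threshold, routing rule, and all names are assumptions, not the project's or NAV's design), an AI-assisted service might only decide automatically when the model is confident, and otherwise refer the case to a case handler:

    # Hypothetical "human-in-the-loop" routing: automated decisions are allowed
    # only above a confidence threshold; everything else goes to a case handler.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str        # "approve", "reject" or "refer_to_case_handler"
        confidence: float
        rationale: str      # short explanation shown to the user

    def decide(score: float, threshold: float = 0.9) -> Decision:
        """Route low-confidence model scores to human review."""
        confidence = max(score, 1.0 - score)
        if confidence < threshold:
            return Decision("refer_to_case_handler", confidence,
                            "Model confidence below threshold; human review required.")
        outcome = "approve" if score >= 0.5 else "reject"
        return Decision(outcome, confidence, f"Automated decision ({confidence:.0%} confidence).")

    print(decide(0.97))   # decided automatically
    print(decide(0.62))   # referred to a case handler

Human-on-the-loop and human-in-command scenarios differ mainly in where the human sits relative to this routing: monitoring and overriding automated decisions after the fact, or retaining the final decision in all cases.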

Publications from Cristin

No publications found


Funding scheme:

IKTPLUSS-IKT og digital innovasjon

Thematic Areas and Topics

Internasjonalisering; IKT forskningsområde; Visualisering og brukergrensesnitt; Kunstig intelligens, maskinlæring og dataanalyse; Menneske, samfunn og teknologi; Digitalisering og bruk av IKT; Offentlig sektor; Fornyelse og innovasjon i offentlig sektor; Forskning for fornyelse av offentlig sektor; Politikk- og forvaltningsområder; Arbeidsliv - Politikk og forvaltning; Digitalisering; Offentlig administrasjon og forvaltning; Næring og handel; Forskning; Likestilling og inkludering; Arbeid; Bransjer og næringer; Arbeidsliv; IKT-næringen; Annen tjenesteyting; Kjønnsperspektiver i forskning; Kjønn som perspektiv i problemstilling; Grunnforskning; Anvendt forskning; Internasjonalt prosjektsamarbeid; Portefølje Forskningssystemet; Portefølje Innovasjon; Portefølje Velferd og utdanning; Portefølje Banebrytende forskning; Portefølje Muliggjørende teknologier; Delportefølje Et velfungerende forskningssystem; Delportefølje Internasjonalisering; Delportefølje Kjønnsperspektiver; Delportefølje Kvalitet; LTP3 Et kunnskapsintensivt næringsliv i hele landet; LTP3 Innovasjon i stat og kommune; LTP3 Styrket konkurransekraft og innovasjonsevne; LTP3 IKT og digital transformasjon; LTP3 Fagmiljøer og talenter; LTP3 Muliggjørende og industrielle teknologier; LTP3 Utenforskap, inkludering, kulturmøter og migrasjon; LTP3 Høy kvalitet og tilgjengelighet; LTP3 Tillit og fellesskap