

Ethical risks assessmeNt of Artificial intelligenCe in pracTice

Alternative title: Praktiske risikovurderinger for kunstig intelligens

Awarded: NOK 12.0 mill.

The use of artificial intelligence in working life raises a number of ethical challenges, which are currently handled through overarching guidelines and organisational policies. These guidelines are often general and abstract, and of little practical value to those who must apply them in their work. The Enact project will develop training methods and enable overarching guidelines for the ethical use of artificial intelligence (AI) in Norwegian working life to be operationalised so that they function in practice. The level of ethical AI understanding (ethical AI literacy) will be mapped for different groups that use AI in their work. Enact will create a methodology and training programme, based on international guidelines, for assessing ethical risks and identifying ethical dilemmas when using AI. The training programme will be developed and tailored around actual ethical challenges identified by the participants, who will describe and model these challenges as dilemmas. The risks and dilemmas that are identified will then be converted into practical guidelines that can be incorporated into the organisations' management systems for employees' use of artificial intelligence. The methodology will be developed further so that it can be adopted by any Norwegian company that uses artificial intelligence, helping them raise their competence in ethics and AI and establish guidelines and a management structure for artificial intelligence in their enterprise. The Enact project brings together Norwegian organisations that use artificial intelligence today in areas such as health (Medsensio), welfare (NAV), finance (DNB), training (Hypatia Learning AS) and transport (Posten), as well as leading research institutions on artificial intelligence and ethics (NORA, SINTEF, NTNU, Østfold University College). An international advisory board has also been established (University of Agder, Imperial College London, University College London, Smart Innovation Norway/Cluster for Applied AI, and NORDE).

ENACT aims to design, develop, evaluate, revise, and establish a methodology for efficiently translating and integrating ethical principles and guidelines into AI-based systems, so that ethical risks are mitigated when such systems are implemented, tailored to Norwegian organizations and businesses. The ENACT project will investigate the ethical AI literacy of stakeholders (i.e., developers, managers, and end-users within the organizations/businesses) and students, and will create a training program and a course focused on the assessment of the ethical risks of AI (as listed in available guidelines, e.g., ALTAI), enhanced by the participatory design of relevant content with the stakeholders and students. In the program/course, ENACT will showcase and evaluate a methodology for assessing the ethical risks arising from AI by converting the ethical principles into practical guidelines. The methodology engages stakeholders and students in co-developing a common formulation and a shared modeling of the ethical aspects of AI as ethical dilemmas. These ethical aspects reflect important values in the culture and context of the organization, so the risks need to be constructively discussed and mitigated as an organic part of the transition to practicing digital ethics. Ultimately, ENACT aims to establish the proposed methodology as a fit-for-purpose approach, tailored to Norwegian organizations and businesses deploying their own AI-based systems, by combining employee-driven innovation with the application of Ethics by Design. To achieve these outcomes, ENACT will exploit a novel, multidisciplinary, and participatory approach to digital ethics training at all levels. This will lay the foundations for a cross-functional AI-governance structure (i.e., a robust process that clarifies how decisions around ethical AI are made and documented) that relies on professional accountability mechanisms and the ethical AI literacy of all stakeholders in the organizations.
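To make the idea of converting ethical principles into practical, documentable guidelines more concrete, the following is a minimal Python sketch of what a single entry in such a management system could look like, assuming a hypothetical ALTAI-style risk register. The field names, risk levels, and example dilemma are illustrative assumptions only; they are not part of the ENACT methodology or of any partner's actual system.

    # Hypothetical sketch: one entry in an ALTAI-style ethical risk register.
    # Field names, enum values, and the example dilemma are illustrative assumptions.
    from dataclasses import dataclass
    from enum import Enum


    class RiskLevel(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3


    @dataclass
    class EthicalRiskEntry:
        altai_requirement: str            # e.g. an ALTAI requirement area such as "Transparency"
        dilemma: str                      # the ethical dilemma as formulated with the stakeholders
        affected_stakeholders: list[str]  # who is affected by the risk
        risk_level: RiskLevel
        practical_guideline: str          # the concrete, documented rule derived from the principle
        accountable_role: str             # who signs off, supporting professional accountability
        mitigated: bool = False


    def open_high_risks(register: list[EthicalRiskEntry]) -> list[EthicalRiskEntry]:
        """Return unmitigated high-risk entries that still need a documented decision."""
        return [e for e in register if e.risk_level is RiskLevel.HIGH and not e.mitigated]


    if __name__ == "__main__":
        register = [
            EthicalRiskEntry(
                altai_requirement="Transparency",
                dilemma="Case workers cannot explain automated recommendations to citizens",
                affected_stakeholders=["case workers", "citizens"],
                risk_level=RiskLevel.HIGH,
                practical_guideline=("Every recommendation shown to a case worker must include "
                                     "a plain-language explanation of its main factors"),
                accountable_role="Head of digital services",
            ),
        ]
        for entry in open_high_risks(register):
            print(f"Open high risk [{entry.altai_requirement}]: {entry.dilemma}")

In this sketch, reviewing the output of open_high_risks() would be one possible routine within the kind of documented, cross-functional governance process described above.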

Funding scheme:

IKTPLUSS-IKT og digital innovasjon
