IKTPLUSS-IKT og digital innovasjon

Ethical risks assessmeNt of Artificial intelligenCe in pracTice

Alternative title: Praktiske risikovurderinger for kunstig intelligens (Practical risk assessments for artificial intelligence)

Awarded: NOK 12.0 mill.

The use of artificial intelligence in working life raises a number of ethical challenges, which are currently addressed largely through overarching guidelines. Such guidelines are often general and abstract, and of little practical value for those who must apply them in their work. The Enact project will develop training methods that enable the general guidelines to work in practice. This is done by first mapping the level of understanding of ethical challenges related to AI, the so-called Ethical AI Literacy (EAIL). Such surveys will also serve as a tool for interested companies beyond the project. Enact will then create a training program based on the CORAS methodology for assessing ethical dilemmas in the use of artificial intelligence. The training program will be further developed and tailored to the actual ethical challenges the participants identify in their own activities. The risks identified must then be converted into practical guidelines that can be incorporated into the organisations' management systems for the use of artificial intelligence.

The methodology will be further developed so that it can be used by any Norwegian enterprise that uses artificial intelligence, helping them raise their competence in ethics and artificial intelligence and establish guidelines and a governance structure for AI. The project has now developed the EAIL methodology and is working on adapting CORAS and tailoring the training.

The Enact project brings together Norwegian organisations that use artificial intelligence today in areas such as health, welfare (NAV), finance (DNB), training (Hypatia Learning AS) and transport (Posten), as well as leading research environments on artificial intelligence and ethics (SINTEF, NTNU, Østfold University College). An international expert group has been established for the project (University of Agder, Imperial College London, University College London, Smart Innovation Norway/Cluster of Applied AI and NORDE).
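As an illustration only, not a project deliverable, the step of converting an identified risk into a practical guideline could be recorded in a simple risk register. The sketch below assumes a CORAS-style view of a risk as an unwanted incident with a likelihood and a consequence; all names, scales and the example entry are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical ordinal scales; CORAS lets each organisation define its own.
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "certain"]
CONSEQUENCE = ["insignificant", "minor", "moderate", "major", "critical"]

@dataclass
class EthicalRiskEntry:
    """One row in an illustrative ethical-AI risk register."""
    unwanted_incident: str   # CORAS term for the harmful event
    asset: str               # the value that is threatened
    likelihood: str
    consequence: str
    guideline: str           # practical guideline derived from the risk

    def risk_level(self) -> str:
        """Toy likelihood x consequence lookup; real scales are organisation-specific."""
        score = LIKELIHOOD.index(self.likelihood) + CONSEQUENCE.index(self.consequence)
        return "low" if score <= 3 else "medium" if score <= 6 else "high"

# Hypothetical example entry.
entry = EthicalRiskEntry(
    unwanted_incident="Model systematically underserves a minority user group",
    asset="Fairness / non-discrimination",
    likelihood="possible",
    consequence="major",
    guideline="Review training data and model outputs for group-level disparities "
              "before each release, and document the review in the management system.",
)
print(entry.risk_level())  # -> "medium"
```

A register of this kind is one possible way such findings could feed into an organisation's existing management system; the project's own tooling may take a different form.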

ENACT aims to design, develop, evaluate, revise, and establish a methodology for efficiently translating and integrating ethical principles and guidelines into AI-based systems, and for mitigating ethical risks when implementing such systems, tailored to Norwegian organizations and businesses. The ENACT project will investigate the ethical AI literacy of stakeholders (i.e., developers, managers, and end-users within the organizations/businesses) and students, and will create a training program and a course focused on the assessment of ethical risks of AI (as listed in available guidelines, e.g., ALTAI), enhanced by participatory design of relevant content with the stakeholders and students. In the program and course, ENACT will showcase and evaluate a methodology for assessing the ethical risks arising from AI by converting ethical principles into practical guidelines. The methodology engages stakeholders and students in co-developing a common formulation and a shared model of the ethical aspects of AI as ethical dilemmas. These ethical aspects reflect important values in the culture and context of the organization, so the risks need to be constructively discussed and mitigated as an organic part of the transition to practicing digital ethics.

Ultimately, ENACT aims to establish the proposed methodology as a fit-for-purpose approach, tailored to Norwegian organizations and businesses deploying their own AI-based systems, by combining employee-driven innovation with the application of Ethics by Design. To achieve these envisioned outcomes, ENACT will exploit a novel, multidisciplinary, and participatory approach to digital ethics training at all levels. This will lay the foundations for a cross-functional AI governance structure (i.e., a robust process clarifying how decisions around ethical AI are made and documented) that relies on professional accountability mechanisms and on the ethical AI literacy of all stakeholders in the organizations.
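Since the abstract refers to ALTAI and to surveying ethical AI literacy, the sketch below shows one way survey responses might be aggregated per requirement. The seven headings are the requirements of the EU's Assessment List for Trustworthy AI (ALTAI); the scoring scheme and function names are assumptions for illustration, not ENACT's actual instrument.

```python
from statistics import mean

# The seven requirements of the EU's Assessment List for Trustworthy AI (ALTAI).
ALTAI_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def literacy_profile(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average 1-5 self-assessment scores per requirement (illustrative scoring only)."""
    return {req: round(mean(responses.get(req, [0])), 2) for req in ALTAI_REQUIREMENTS}

# Hypothetical answers from one respondent group; unanswered requirements score 0.
answers = {
    "Transparency": [4, 3, 5],
    "Accountability": [2, 3],
}
print(literacy_profile(answers))
```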

Funding scheme:

IKTPLUSS-IKT og digital innovasjon