EXPLAIN: Explainable AI for automated fact-checking and additional insight - when facts are not enough

Alternative title (Norwegian): EXPLAIN: Forklarbar AI for automatisert faktasjekking og ytterligere innsikt - når fakta ikke er nok

Awarded: NOK 6.3 mill.

Project Number: 337133

Project Period: 2022–2024

Factiverse is launching an innovation effort to automate the detection of misinformation and deliberate disinformation and to provide a credibility check for any online information using cutting-edge AI and NLP. The project will integrate explanations of AI decisions and data sources into the company’s AI editor and software tools, catering to research, news production, and news consumption.

Today, Factiverse’s algorithms can show whether or not a claim is disputed. However, B2B decision-makers need credible insight into the decisions behind AI models in order to detect biased data, detect missing data, or simply tune the algorithms; for many industries it is increasingly important to demystify the black box of AI models. The solution will implement ML models that are agnostic to specific NLP tasks yet provide explanations and are interpretable by design, and we will implement both local and global explanation methods to understand the behavior and weaknesses of the ML models. When refining the use of AI-powered systems, it is also important to understand the risks of deploying a model even before any user interaction. To that end, we have identified a gap in existing tools for debugging and monitoring ML models, and argue that explainability should be built into these tools.
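
To make the distinction concrete: a local explanation accounts for a single prediction, while a global explanation characterizes the model's behavior across all inputs. The Python sketch below illustrates both on a toy claim classifier; it is not Factiverse's implementation, and the LIME explainer, the scikit-learn pipeline, the labels, and the example claims are all illustrative assumptions.

    # Minimal illustrative sketch (not Factiverse's code): local vs. global
    # explanations for a toy "disputed claim" classifier. Labels, training
    # sentences, and model choice are assumptions made for the example.
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny toy training set: 1 = disputed claim, 0 = supported claim.
    texts = [
        "Scientists agree the climate is warming",
        "The earth is flat according to new research",
        "Vaccines are safe and effective",
        "5G towers spread viruses",
    ]
    labels = [0, 1, 0, 1]

    # A task-agnostic text pipeline: TF-IDF features + logistic regression.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # Local explanation: which words pushed THIS one claim toward "disputed"?
    explainer = LimeTextExplainer(class_names=["supported", "disputed"])
    claim = "New research shows the earth is flat"
    explanation = explainer.explain_instance(claim, model.predict_proba, num_features=5)
    print(explanation.as_list())  # [(word, weight), ...] for this single claim

    # Global explanation: inspect the model as a whole, e.g. the learned
    # coefficient of every vocabulary term, independent of any single input.
    vec = model.named_steps["tfidfvectorizer"]
    clf = model.named_steps["logisticregression"]
    for word, coef in zip(vec.get_feature_names_out(), clf.coef_[0]):
        print(f"{word}: {coef:+.3f}")

Interpretable-by-design models, such as the logistic regression above, expose their global behavior directly through their weights; for black-box models, post-hoc tools such as LIME supply the local view.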

Stakeholders include newsrooms, investment companies, industry associations, and authorities, as well as online news readers and society at large. A successful development and commercialization effort would unlock a significant national and international market opportunity: fact-checking is a relatively new industry, and a knowledge-based value chain is emerging. Factiverse is the natural project leader and administrative lead, and also holds the strongest incentives tied to a successful outcome. The project plan has been developed together with partners in both the media and finance industries to incorporate end-user perspectives, requirements, and operational insight, and the project’s research challenges are anchored with prospective future clients in those industries. A particular focus is directed toward curbing the negative effects of misinformation and disinformation on sustainability and the green transition.

Funding scheme: BIA-Brukerstyrt innovasjonsarena (User-driven Innovation Arena)