The EU, NATO and the UN, along with 1,500 experts consulted by the World Economic Forum, perceive disinformation and fake news, and the resulting erosion of trust in newly elected governments, as one of the greatest current threats to democracy, not least because of the rapid development of generative AI. Two knowledge needs are therefore pressing: 1) forecasts of next-generation disinformation and influence operations, and 2) a basis for developing tools to identify and warn about such disinformation. Filling these knowledge gaps is the objective of NxtGenFake. In NxtGenFake, linguists and media scientists at the University of Oslo (ILOS and IMK, respectively) and computer scientists at SINTEF will collaborate, taking as our point of departure a selection of mainly Russian state disinformation narratives gleaned from online sources in English, Norwegian and Russian. Based on an abstraction of these narratives and on analyses comparing them with the datasets of genuine news from the Fakespeak project, we will test various methods for prompting a selection of large language models (LLMs) to generate texts reflecting disinformation narratives in all three languages. Applying quantitative and qualitative methods, our media scientists will examine the disinformation narratives with regard to, among other things, their content and discursive features, while our linguists will investigate their grammatical, stylistic and pragmatic features. Building on these results, our computer scientists will conduct systematic assessments of how LLMs may increase the persuasiveness of disinformation, thereby demonstrating to our stakeholders what future disinformation campaigns may look like and enabling them to prepare and to take measures that strengthen societal resilience against such operations. In a final step, the computer scientists will work towards the development of cutting-edge tools that can identify suspicious AI-generated textual content and warn relevant authorities when such content may be part of state or non-state influence operations and disinformation campaigns.