Towards robust Large Language Models for agent-based systems

Alternative title: Mot robuste store språkmodeller for agentbaserte systemer

Awarded: NOK 2.1 mill.

Project Manager:

Project Number:

354245

Application Type:

Project Period:

2024 - 2027

Funding received from:

Location:

The dissemination of misinformation and disinformation poses a threat to democracy and a major societal hurdle, deepening divisions and affecting political and public life worldwide. The problem arises from unintentional mistakes by content creators and from AI-generated content, as well as from deliberate manipulation using AI. In this project we focus on factuality in Large Language Models (LLMs) in the context of agent-based systems.

LLMs have advanced capabilities but can produce errors with legal and financial consequences. They are used in agent-based systems for autonomous tasks, human-like interactions, and decision-making. Agents powered by LLMs can understand and generate human language, enabling complex interactions such as scheduling appointments and troubleshooting. Think of the movie Her and the interactions its main character has with different agents throughout his daily life. However, the accuracy and reliability of these agent-based systems are directly tied to the performance of the underlying LLMs. Despite their advanced capabilities, LLMs can still produce incorrect or inconsistent information, which can compromise the effectiveness of the agent-based system. This vulnerability can manifest in many ways, and the damage can be irreversible.

In conclusion, while LLM agent-based systems offer significant advances in automating tasks and interactions, their susceptibility to errors necessitates robust measures to ensure accuracy and reliability. By addressing these vulnerabilities, we can harness the full potential of LLM agents while minimizing the risks associated with their deployment.
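As a purely illustrative aside (not part of the project description above), one widely used robustness measure of the kind the summary calls for is a self-consistency check: sample the model several times and let the agent act only when the answers agree. The minimal Python sketch below assumes a generic model call; the call_llm stub and all other names are hypothetical placeholders.

from collections import Counter

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real model call (hosted API or local model).
    raise NotImplementedError("plug in a concrete LLM here")

def answer_with_consistency_check(prompt: str, samples: int = 5, threshold: float = 0.6):
    # Sample the model several times and keep the majority answer only if it
    # is frequent enough; otherwise the agent abstains instead of acting on
    # a potentially inconsistent output.
    answers = [call_llm(prompt) for _ in range(samples)]
    answer, count = Counter(answers).most_common(1)[0]
    if count / samples >= threshold:
        return answer
    return None  # inconsistent outputs: defer to a human or a safer fallback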

Funding scheme:

NAERINGSPH-Nærings-phd (Industrial Ph.D. scheme)