ARMADA

Reliable Conversational Domain-specific Data Exploration and Analysis

The rapid adoption of Large Language Models (LLMs) has outpaced the development of techniques for evaluating their output quality. This gap is critical because LLMs are prone to producing “hallucinations”: plausible responses that are nonetheless factually incorrect or inconsistent with the user's intent. Relying on LLM output without proper assessment may therefore have severe consequences.

The ARMADA (Reliable Conversational Domain-specific Data Exploration and Analysis) doctoral network aims to train 15 versatile and interconnected Early Stage Researchers (ESRs) who specialize in the overarching area of Conversational Artificial Intelligence (Conversational AI) and in the challenges raised by recent advances in LLMs. These specialists will acquire unique knowledge and skills in Artificial Intelligence, Natural Language Processing, Machine Learning, Data Management, and Algorithm Design to improve the reliability of LLMs. A reliable LLM produces timely, consistent, and verifiable answers and provides guidance to the user. The program addresses the EU's pressing need to regulate AI by training Conversational AI experts who can advise EU bodies on technical matters related to the adoption of these technologies in critical disciplines such as medicine, education, and business intelligence.