Authors:
Kubrak Kateryna, Botchorishvili Lana, Milani Fredrik, Nolte Alexander, Dumas Marlon
Abstract
Prescriptive process monitoring (PrPM) systems analyze ongoing business process instances to recommend real-time interventions that optimize performance. The usefulness of these systems hinges on users applying the generated recommendations. Thus, users need to understand the rationale behind these recommendations. One way to build this understanding is to enhance each recommendation with explanations. Existing approaches generate explanations consisting of static text or plots, which users often struggle to understand. Previous work has shown that dialogue systems enhance the effectiveness of explanations in recommender systems. Large Language Models (LLMs) are an emerging technology that facilitates the construction of dialogue systems. In this paper, we investigate the applicability of LLMs for generating explanations in PrPM systems. Following a design science approach, we elicit explainability questions that users may have about PrPM outputs, we design a prompting method on this basis, and we conduct an evaluation with potential users to assess their perception of the explanations and how they interact with the system. The results indicate that LLMs can help users of PrPM systems better understand the origin of the recommendations, and can produce explanations of the recommendations that have sufficient detail and fulfill users' expectations. On the other hand, users find that the explanations do not always address the "why" of a recommendation and do not let them judge whether they can trust the recommendation.
Publisher:
Springer Nature Switzerland