BACKGROUND
Mentalization, integral to human cognitive processes, pertains to the interpretation of one's own and others' mental states, including emotions, beliefs, and intentions. With the advent of artificial intelligence (AI) and the prominence of large language models (LLMs) in mental health applications, questions persist about their aptitude in emotional comprehension. The prior iteration, ChatGPT-3.5, demonstrated an advanced capacity to interpret emotions from textual data, surpassing human benchmarks. Given the introduction of ChatGPT-4, with its enhanced visual processing capabilities, and considering Bard's existing visual functionalities, a rigorous assessment of their proficiency in visual mentalizing is warranted.
OBJECTIVE
The aim of this study was to critically evaluate the capabilities of ChatGPT-4 and Google Bard in discerning visual mentalizing indicators, as contrasted with their text-based mentalizing abilities.
METHODS
We employed the Reading the Mind in the Eyes Test (RMET), developed by Baron-Cohen, to assess the models' proficiency in interpreting visual emotional indicators. In parallel, the Levels of Emotional Awareness Scale (LEAS) was used to evaluate the LLMs' aptitude for textual mentalizing. Combining data from both tests provided a holistic view of the mentalizing capabilities of ChatGPT-4 and Bard.
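As a minimal illustrative sketch, an RMET-style item could be presented to a vision-capable model programmatically, as below. The model name, stimulus URL, and answer options are placeholder assumptions for demonstration, not the study's actual materials or administration procedure.

```python
# Sketch: presenting one RMET-style item to a vision-capable model via the
# OpenAI Python SDK. Image URL, options, and model name are illustrative
# assumptions, not the study's actual stimuli or configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ITEM_OPTIONS = ["jealous", "panicked", "arrogant", "hateful"]  # hypothetical item

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "For the pair of eyes in this photo, choose the one word "
                     "that best describes what the person is thinking or "
                     f"feeling: {', '.join(ITEM_OPTIONS)}. Answer with one word."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/rmet_item_01.jpg"}},  # placeholder stimulus
        ],
    }],
)

answer = response.choices[0].message.content.strip().lower()
print(answer)  # compare against the item's keyed answer to score 0 or 1
```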
RESULTS
• ChatGPT-4 RMET. ChatGPT-4 displayed a pronounced ability in emotion recognition, scoring 26 and 27 out of 36 in two separate evaluations, well above the chance level of approximately 9 correct (see the statistical sketch after this list). These scores align with established benchmarks from the general human population. Notably, ChatGPT-4's responses were consistent across runs, with no discernible bias related to the gender of the face depicted in the stimulus or the nature of the emotion portrayed.
• Google Bard RMET. By contrast, Bard's performance was consistent with random responding, with scores of 10 and 12 out of 36, so further item-level analysis was not pursued.
• LEAS. In the domain of textual analysis, both ChatGPT-4 and Bard surpassed established benchmarks from the general population, with closely matched performance.
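For context, the RMET comprises 36 items with 4 response options each, so chance-level performance is about 9 correct (25%). The sketch below shows the kind of one-sided binomial test that separates ChatGPT-4's scores from chance while leaving Bard's compatible with guessing; the choice of test is our illustrative assumption, not necessarily the study's exact analysis.

```python
# Binomial test of RMET scores against the 25% chance level
# (36 items, 4 options each). Illustrative assumption, not the
# study's reported analysis.
from scipy.stats import binomtest

N_ITEMS, P_CHANCE = 36, 0.25

for label, score in [("ChatGPT-4 run 1", 26), ("ChatGPT-4 run 2", 27),
                     ("Bard run 1", 10), ("Bard run 2", 12)]:
    result = binomtest(score, N_ITEMS, P_CHANCE, alternative="greater")
    print(f"{label}: {score}/36, p = {result.pvalue:.4f}")

# ChatGPT-4's scores far exceed the ~9 expected by chance (p < .001),
# whereas Bard's scores of 10 and 12 do not differ reliably from guessing.
```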
CONCLUSIONS
ChatGPT-4 demonstrated efficacy in visual mentalizing, aligning closely with human performance standards. Although both models displayed commendable acumen in textual emotion interpretation, Bard's capabilities in visual emotion interpretation warrant further scrutiny and potential refinement.