
Challenging the validity of personality tests for large language models

With large language models (LLMs) like GPT-4 appearing to behave increasingly human-like in text-based interactions, it has become popular to attempt to evaluate personality traits of LLMs using questionnaires originally developed for humans. While reusing measures is a resource-efficient way to evaluate LLMs, careful adaptations are usually required to ensure that assessment results are valid even across human subpopulations. In this work, we provide evidence that LLMs’ responses to personality tests systematically deviate from human responses, implying that the results of these tests cannot be interpreted in the same way. Concretely, reverse-coded items (“I am introverted” vs. “I am extraverted”) are often both answered affirmatively. Furthermore, variation across prompts designed to “steer” LLMs to simulate particular personality types does not follow the clear separation into five independent personality factors observed in human samples. In light of these results, we believe that it is important to investigate tests’ validity for LLMs before drawing strong conclusions about potentially ill-defined concepts like LLMs’ “personality”.
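
To make the reverse-coding issue concrete, below is a minimal, hypothetical Python sketch (not code from the paper) of the kind of consistency check the abstract alludes to: agreement with a positively keyed item (“I am extraverted”) and with its reverse-coded counterpart (“I am introverted”) should not both fall above the scale midpoint. The 1–5 Likert scale and the midpoint value are illustrative assumptions.

# Hypothetical sketch: flag inconsistent answers to a reverse-coded item pair.
# Assumes integer Likert responses on a 1..5 agreement scale (illustrative).

def reverse_score(response: int, scale_max: int = 5) -> int:
    """Map agreement with a reverse-coded item back onto the original key."""
    return scale_max + 1 - response

def both_affirmative(pos_response: int, neg_response: int,
                     midpoint: float = 3.0) -> bool:
    """True if both the positively keyed item and its reverse-coded
    counterpart are answered above the scale midpoint."""
    return pos_response > midpoint and neg_response > midpoint

# An LLM agreeing (4/5) with both "I am extraverted" and "I am introverted"
# is flagged: reverse_score(4) == 2 contradicts the direct response of 4.
assert both_affirmative(4, 4)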

Author(s): Tom Sühr, Florian Dorner, Samira Samadi and Augustin Kelava
Year: 2023
BibTeX Type: Unpublished (unpublished)
How Published: arXiv
State: Submitted

BibTeX

@unpublished{TomLLM,
  title = {Challenging the validity of personality tests for large language models},
  abstract = {With large language models (LLMs) like GPT-4 appearing to behave increasingly human-like in text-based interactions, it has become popular to attempt to evaluate personality traits of LLMs using questionnaires originally developed for humans. While reusing measures is a resource-efficient way to evaluate LLMs, careful adaptations are usually required to ensure that assessment results are valid even across human subpopulations. In this work, we provide evidence that LLMs’ responses to personality tests systematically deviate from human responses, implying that the results of these tests cannot be interpreted in the same way. Concretely, reverse-coded items (“I am introverted” vs. “I am extraverted”) are often both answered affirmatively. Furthermore, variation across prompts designed to “steer” LLMs to simulate particular personality types does not follow the clear separation into five independent personality factors observed in human samples. In light of these results, we believe that it is important to investigate tests’ validity for LLMs before drawing strong conclusions about potentially ill-defined concepts like LLMs’ “personality”.},
  howpublished = {arXiv},
  year = {2023},
  slug = {tomllm},
  author = {S{\"u}hr, Tom and Dorner, Florian and Samadi, Samira and Kelava, Augustin}
}