Lars Erik Holmquist will give a talk at KTH Royal Institute of Technology in Stockholm, Sweden, and online, on research about untruth in AI.

Large Language Models (LLMs) have many practical uses in areas such as journalism, search, and coding. However, a growing concern is that they are also prone to presenting incorrect information. This is sometimes called “hallucination”, but a more accurate term would be “bullshit”, i.e. text produced without concern for its truth. In this study, we are not interested in which specific untruths LLMs present, but in how they present them. We used synthetic ethnography, a method for the qualitative study of generative models, to study two LLMs of different size and capability. We collected three cases where LLMs presented incorrect information and observed the strategies they used to justify it. From these observations we can begin to form an understanding of what happens when an LLM reaches the edge of its knowledge base and takes corrective action. Our conclusion is that interfaces should be better designed to reveal this tendency of LLMs to “fill in” information they are missing, but also that this ability may be one of their strengths.
Time: Friday, October 31, 11.00-12.00 CET.
Information on how to join online or in person.
Related paper:
Holmquist, L.E. and Nemeth, S. 2025. “Don't believe anything I tell you, it's all lies!”: A synthetic ethnography on untruth in large language models. In CHI EA '25: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems.