Publication details

LLM's-side Bias

Authors

ONDRÁČEK Tomáš, URBAŃSKI Mariusz, ŁUPKOWSKI Paweł

Year of publication 2025
Type Appeared in Conference without Proceedings
MU Faculty or unit

Faculty of Economics and Administration

Citation
Description How effectively can large language models (LLMs) simulate argument evaluation? This paper explores the manifestation of myside bias (Stanovich, West, & Toplak, 2013) in synthetic probands—artificially generated entities modelled to resemble human reasoners. We examine whether synthetic probands exhibit biases similar to those observed in human reasoning, particularly in the context of argument evaluation. This inquiry extends beyond determining whether LLMs merely accept arguments as sound to also address their ability to assess argument validity (cf. Čavojová, Šrol, & Adamus, 2018).
