Publication details
LLM's-side Bias
| Authors | |
|---|---|
| Year of publication | 2025 |
| Type | Appeared in Conference without Proceedings |
| MU Faculty or unit | |
| Citation | |
| Description | How effectively can large language models (LLMs) simulate argument evaluation? This paper explores the manifestation of myside bias (Stanovich, West, & Toplak, 2013) in synthetic probands, i.e., artificially generated entities modelled to resemble human reasoners. We examine whether synthetic probands exhibit biases similar to those observed in human reasoning, particularly in the context of argument evaluation. This inquiry extends beyond determining whether LLMs merely accept arguments as sound to address their ability to assess argument validity (cf. Čavojová, Šrol, & Adamus, 2018). |