Can LLMs Flip Coins in Their Heads?
Researchers tested whether large language models can simulate fair coin flips as part of their reasoning. The study found that although the models' outputs look random at first glance, they deviate from truly probabilistic behavior and exhibit systematic biases. This reveals limits in how LLMs handle uncertainty and randomness internally.
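One way such a bias could be detected (a minimal sketch, not the study's actual methodology) is to collect a long sequence of model-generated flips and test whether the head rate deviates significantly from the 50% expected of a fair coin. The example below uses a simple two-sided z-test on a hypothetical heads-leaning sequence standing in for model outputs:

```python
import math

def bias_test(flips, p=0.5):
    """Two-sided z-test: is a sequence of 'H'/'T' flips consistent with a fair coin?"""
    n = len(flips)
    heads = flips.count("H")
    # Standard error of the head count under the null hypothesis of fairness
    se = math.sqrt(n * p * (1 - p))
    z = (heads - n * p) / se
    # Two-sided p-value via the normal approximation
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return heads / n, z, p_value

# Hypothetical sequence: 70 heads out of 100 flips
flips = "H" * 70 + "T" * 30
rate, z, p_value = bias_test(flips)
print(f"head rate={rate:.2f}, z={z:.2f}, p={p_value:.4f}")
```

A fair coin would yield a head rate near 0.5 and a large p-value; here the small p-value flags a systematic lean toward heads. Note that this only catches frequency bias; a sequence can pass this test while still being predictable (for example, strictly alternating H and T), which is why randomness is usually assessed with a battery of tests rather than one.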