We're training LLMs to hallucinate by rewarding them for guessing