Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs