Prompt caching: 10x cheaper LLM tokens