Cutting LLM Batch Inference Time by Half with Dynamic Prefix Bucketing