"Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve."

Amey Agrawal et al. (2024)


access: open

type: Conference or Workshop Paper

metadata version: 2024-07-16