"Efficient LLM Inference using Dynamic Input Pruning and Cache-Aware Masking."

Marco Federici et al. (2024)

DOI: 10.48550/ARXIV.2412.01380

access: open

type: Informal or Other Publication

metadata version: 2025-01-12