[Paper] Compressing Context to Enhance Inference Efficiency of Large Language Models
Yucheng Li, Bo Dong, Frank Guerin, Chenghua Lin. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023. aclanthology.org
Abstract: Large language models (LLMs) achieved remarkable performance across various tasks. However, they face challenges in managing long documents and extended con..
2025. 1. 9.