Papers (2)

[Paper] RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation
(arxiv.org)
Abstract: Retrieving documents and prepending them in-context at inference time improves the performance of language models (LMs) on a wide range of tasks. However, these documents, often spanning hundreds of words, make inference substantially more expensive. We propose...
(A rough code sketch of this retrieve-and-prepend setup appears after the listing.)
2025. 1. 9.

[Paper] Learning to Filter Context for Retrieval-Augmented Generation
(arxiv.org)
Abstract: On-the-fly retrieval of relevant knowledge has proven an essential element of reliable systems for tasks such as open-domain question answering and fact verification. However, because retrieval systems are not perfect, generation models are required to gen...
2025. 1. 9.
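The RECOMP post above describes the standard retrieval-augmented setup (retrieve passages, prepend them to the prompt) and the cost problem that motivates compressing them first. As a rough, hedged illustration of that idea only, here is a minimal Python sketch; `retrieve`, `compress`, and `generate` are hypothetical stand-ins, not APIs from either paper.

```python
# Minimal sketch of retrieval-augmented prompting as described in the RECOMP
# abstract: retrieve passages, optionally compress them into a short summary,
# and prepend the result to the question before calling the LM.
# The callables below are hypothetical placeholders supplied by the caller.

from typing import Callable, List, Optional


def rag_answer(
    question: str,
    retrieve: Callable[[str, int], List[str]],        # returns top-k passages
    generate: Callable[[str], str],                    # the language model
    compress: Optional[Callable[[List[str], str], str]] = None,  # optional summarizer
    k: int = 5,
) -> str:
    passages = retrieve(question, k)
    if compress is not None:
        # RECOMP-style idea: shrink hundreds of words of retrieved context
        # into a short, query-focused summary before prepending it.
        context = compress(passages, question)
    else:
        # Plain retrieval augmentation: prepend the raw passages as-is.
        context = "\n\n".join(passages)
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)
```

This is only meant to show where a compression step would slot into the pipeline; the actual compressors and selective-augmentation policy are what the paper itself contributes.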