|[From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression](https://arxiv.org/abs/2410.04139)<br> Eunseong Choi, Sunkyung Lee, Minjin Choi, June Park, Jongwuk Lee |<img width="1002" alt="image" src="https://arxiv.org/html/2410.04139v1/extracted/5902409/Figures/fig_R2C_framework_2col_v4.png"> |[Paper](https://arxiv.org/abs/2410.04139)|[//]: #10/14
|[Perception Compressor: A training-free prompt compression method in long context scenarios](https://arxiv.org/abs/2409.19272)<br> Jiwei Tang, Jin Xu, Tingwei Lu, Hai Lin, Yiming Zhao, Hai-Tao Zheng |<img width="1002" alt="image" src="https://arxiv.org/html/2409.19272v1/x1.png"> |[Paper](https://arxiv.org/abs/2409.19272)|[//]: #10/02
|[](https://github.com/Workday/cpc)<br>[Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference](https://arxiv.org/abs/2409.01227)<br> Barys Liskavets, Maxim Ushakov, Shuvendu Roy, Mark Klibanov, Ali Etemad, Shane Luke |<img width="1002" alt="image" src="https://arxiv.org/html/2409.01227v3/x1.png"> |[Github](https://github.com/Workday/cpc)<br>[Paper](https://arxiv.org/abs/2409.01227)|[//]: #12/30
|[Task-agnostic Prompt Compression with Context-aware Sentence Embedding and Reward-guided Task Descriptor](https://arxiv.org/abs/2502.13374v1)<br> Barys Liskavets, Shuvendu Roy, Maxim Ushakov, Mark Klibanov, Ali Etemad, Shane Luke |<img width="1002" alt="image" src="https://arxiv.org/html/2502.13374v1/x2.png"> |[Paper](https://arxiv.org/abs/2502.13374v1)|[//]: #12/30
|[A Silver Bullet or a Compromise for Full Attention? A Comprehensive Study of Gist Token-based Context Compression](https://arxiv.org/abs/2412.17483)<br> Chenlong Deng, Zhisong Zhang, Kelong Mao, Shuaiyi Li, Xinting Huang, Dong Yu, Zhicheng Dou |<img width="1002" alt="image" src="https://arxiv.org/html/2412.17483v1/x1.png"> |[Paper](https://arxiv.org/abs/2412.17483)|[//]: #12/30
|[](https://github.com/alipay/L3TC-leveraging-rwkv-for-learned-lossless-low-complexity-text-compression)<br>[L3TC: Leveraging RWKV for Learned Lossless Low-Complexity Text Compression](https://arxiv.org/abs/2412.16642)<br> Junxuan Zhang, Zhengxue Cheng, Yan Zhao, Shihao Wang, Dajiang Zhou, Guo Lu, Li Song |<img width="1002" alt="image" src="https://arxiv.org/html/2412.16642v2/x2.png"> |[Github](https://github.com/alipay/L3TC-leveraging-rwkv-for-learned-lossless-low-complexity-text-compression)<br>[Paper](https://arxiv.org/abs/2412.16642)|[//]: #12/30
|[](https://github.com/NL2G/promptoptme)<br>[PromptOptMe: Error-Aware Prompt Compression for LLM-based MT Evaluation Metrics](https://arxiv.org/abs/2412.16120)<br> Daniil Larionov, Steffen Eger |<img width="1002" alt="image" src="https://arxiv.org/html/2412.16120v1/x1.png"> |[Github](https://github.com/NL2G/promptoptme)<br>[Paper](https://arxiv.org/abs/2412.16120)|[//]: #12/30