diff --git a/README.md b/README.md
index 45f0edb0417711dc2a58744392f571c58e35d2f5..c34e9c28defc0ef3ca220f174a66034a529063da 100644
--- a/README.md
+++ b/README.md
@@ -259,6 +259,8 @@ For each topic, we have curated a list of recommended papers that have garnered
 |[![Publish](https://img.shields.io/badge/Conference-EMNLP'24%20Findings-blue)]()<br>[Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability](https://arxiv.org/abs/2410.11786) <br> Tsz Ting Chung, Leyang Cui, Lemao Liu, Xinting Huang, Shuming Shi, Dit-Yan Yeung |<img width="202" alt="image" src="https://arxiv.org/html/2410.11786v1/x1.png"> |[Paper](https://arxiv.org/abs/2410.11786)|[//]: #10/21
 |[![Publish](https://img.shields.io/badge/Conference-EMNLP'24%20Findings-blue)]()<br>[From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression](https://arxiv.org/abs/2410.04139) <br> Eunseong Choi, Sunkyung Lee, Minjin Choi, June Park, Jongwuk Lee |<img width="1002" alt="image" src="https://arxiv.org/html/2410.04139v1/extracted/5902409/Figures/fig_R2C_framework_2col_v4.png"> |[Paper](https://arxiv.org/abs/2410.04139)|[//]: #10/14
 |[Perception Compressor:A training-free prompt compression method in long context scenarios](https://arxiv.org/abs/2409.19272) <br> Jiwei Tang, Jin Xu, Tingwei Lu, Hai Lin, Yiming Zhao, Hai-Tao Zheng |<img width="1002" alt="image" src="https://arxiv.org/html/2409.19272v1/x1.png"> |[Paper](https://arxiv.org/abs/2409.19272)|[//]: #10/02
+|[![Star](https://img.shields.io/github/stars/Workday/cpc.svg?style=social&label=Star)](https://github.com/Workday/cpc)[![Publish](https://img.shields.io/badge/Conference-AAAI'25-blue)]()<br>[Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference](https://arxiv.org/abs/2409.01227) <br> Barys Liskavets, Maxim Ushakov, Shuvendu Roy, Mark Klibanov, Ali Etemad, Shane Luke |<img width="1002" alt="image" src="https://arxiv.org/html/2409.01227v3/x1.png"> |[Github](https://github.com/Workday/cpc) <br> [Paper](https://arxiv.org/abs/2409.01227)|[//]: #12/30
+|[Task-agnostic Prompt Compression with Context-aware Sentence Embedding and Reward-guided Task Descriptor](https://arxiv.org/abs/2502.13374) <br> Barys Liskavets, Shuvendu Roy, Maxim Ushakov, Mark Klibanov, Ali Etemad, Shane Luke |<img width="1002" alt="image" src="https://arxiv.org/html/2502.13374v1/x2.png"> |[Paper](https://arxiv.org/abs/2502.13374)|[//]: #12/30
 
 ### Low-Rank Decomposition
 | Title & Authors | Introduction | Links |
diff --git a/text_compression.md b/text_compression.md
index 6f54a6b4dfd3adf742fd411ec3f1d66ee1b8df6e..c7228c98e9957b1ec1137433fdba935a49a160a9 100644
--- a/text_compression.md
+++ b/text_compression.md
@@ -39,3 +39,5 @@
 |[A Silver Bullet or a Compromise for Full Attention? A Comprehensive Study of Gist Token-based Context Compression](https://arxiv.org/abs/2412.17483) <br> Chenlong Deng, Zhisong Zhang, Kelong Mao, Shuaiyi Li, Xinting Huang, Dong Yu, Zhicheng Dou |<img width="1002" alt="image" src="https://arxiv.org/html/2412.17483v1/x1.png"> |[Paper](https://arxiv.org/abs/2412.17483)|[//]: #12/30
 |[![Star](https://img.shields.io/github/stars/alipay/L3TC-leveraging-rwkv-for-learned-lossless-low-complexity-text-compression.svg?style=social&label=Star)](https://github.com/alipay/L3TC-leveraging-rwkv-for-learned-lossless-low-complexity-text-compression)<br>[L3TC: Leveraging RWKV for Learned Lossless Low-Complexity Text Compression](https://arxiv.org/abs/2412.16642) <br> Junxuan Zhang, Zhengxue Cheng, Yan Zhao, Shihao Wang, Dajiang Zhou, Guo Lu, Li Song |<img width="1002" alt="image" src="https://arxiv.org/html/2412.16642v2/x2.png"> |[Github](https://github.com/alipay/L3TC-leveraging-rwkv-for-learned-lossless-low-complexity-text-compression) <br> [Paper](https://arxiv.org/abs/2412.16642)|[//]: #12/30
 |[![Star](https://img.shields.io/github/stars/NL2G/promptoptme.svg?style=social&label=Star)](https://github.com/NL2G/promptoptme)<br>[PromptOptMe: Error-Aware Prompt Compression for LLM-based MT Evaluation Metrics](https://arxiv.org/abs/2412.16120) <br> Daniil Larionov, Steffen Eger |<img width="1002" alt="image" src="https://arxiv.org/html/2412.16120v1/x1.png"> |[Github](https://github.com/NL2G/promptoptme) <br> [Paper](https://arxiv.org/abs/2412.16120)|[//]: #12/30
+|[![Star](https://img.shields.io/github/stars/Workday/cpc.svg?style=social&label=Star)](https://github.com/Workday/cpc)[![Publish](https://img.shields.io/badge/Conference-AAAI'25-blue)]()<br>[Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference](https://arxiv.org/abs/2409.01227) <br> Barys Liskavets, Maxim Ushakov, Shuvendu Roy, Mark Klibanov, Ali Etemad, Shane Luke |<img width="1002" alt="image" src="https://arxiv.org/html/2409.01227v3/x1.png"> |[Github](https://github.com/Workday/cpc) <br> [Paper](https://arxiv.org/abs/2409.01227)|[//]: #12/30
+|[Task-agnostic Prompt Compression with Context-aware Sentence Embedding and Reward-guided Task Descriptor](https://arxiv.org/abs/2502.13374) <br> Barys Liskavets, Shuvendu Roy, Maxim Ushakov, Mark Klibanov, Ali Etemad, Shane Luke |<img width="1002" alt="image" src="https://arxiv.org/html/2502.13374v1/x2.png"> |[Paper](https://arxiv.org/abs/2502.13374)|[//]: #12/30
\ No newline at end of file