Blank-z0 authored
Add paper "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification"
Dynamic-LLaVA is the first MLLM acceleration framework that simultaneously sparsifies both the vision and language contexts, and it unifies inference efficiency optimization across the different MLLM inference modes in a single framework. In practice, Dynamic-LLaVA achieves additional inference efficiency throughout the entire generation process, with negligible degradation of understanding and generation ability, or even performance gains, compared to full-context inference baselines.
GitHub: https://github.com/Osilly/dynamic_llava
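
To give a rough sense of what "context sparsification" means in practice, the sketch below keeps only a learned top-k fraction of vision (or output-language) tokens before they reach the decoder. This is a minimal PyTorch sketch, not the repository's implementation; the `TokenSparsifier` module, the `keep_ratio` parameter, and the MLP scorer are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class TokenSparsifier(nn.Module):
    """Score tokens with a small MLP and keep only the top-k fraction.

    Illustrative sketch of context sparsification, not the actual
    Dynamic-LLaVA predictor; hidden_dim and keep_ratio are hypothetical.
    """

    def __init__(self, hidden_dim: int = 1024, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 4),
            nn.GELU(),
            nn.Linear(hidden_dim // 4, 1),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, hidden_dim) vision or language context
        scores = self.scorer(tokens).squeeze(-1)           # (batch, seq_len)
        k = max(1, int(tokens.size(1) * self.keep_ratio))  # tokens to retain
        keep_idx = scores.topk(k, dim=1).indices.sort(dim=1).values
        # Gather the retained tokens, preserving their original order.
        return torch.gather(
            tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        )


if __name__ == "__main__":
    sparsifier = TokenSparsifier(hidden_dim=1024, keep_ratio=0.5)
    vision_context = torch.randn(2, 576, 1024)  # e.g. 576 image patch tokens
    reduced = sparsifier(vision_context)
    print(reduced.shape)  # torch.Size([2, 288, 1024])
```

The hard top-k selection shown here is inference-oriented; a trainable variant would need a differentiable relaxation of the selection step.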