Add paper "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification"

Dynamic-LLaVA is the first MLLM acceleration framework that simultaneously sparsifies both the vision and language contexts, unifying inference efficiency optimization across the different MLLM inference modes. In practice, Dynamic-LLaVA achieves additional inference efficiency throughout the entire generation process, with negligible degradation of understanding and generation ability, or even performance gains, compared to full-context inference baselines.

GitHub: https://github.com/Osilly/dynamic_llava
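To make the core idea concrete, here is a minimal PyTorch sketch of token-level context sparsification: a lightweight scorer ranks tokens and only the top fraction is kept before they reach the decoder. The class name, scorer design, and keep ratio are illustrative assumptions, not the paper's actual implementation (see the GitHub repo for that).

```python
import torch
import torch.nn as nn

class TokenSparsifier(nn.Module):
    """Hypothetical sketch: score tokens with a small head and keep only
    the top-k fraction, approximating vision/language context sparsification."""

    def __init__(self, hidden_dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # lightweight importance scorer (assumption)
        self.keep_ratio = keep_ratio

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, hidden_dim)
        scores = self.scorer(tokens).squeeze(-1)            # (batch, seq_len)
        k = max(1, int(tokens.size(1) * self.keep_ratio))   # number of tokens to keep
        idx = scores.topk(k, dim=1).indices                 # indices of the kept tokens
        idx, _ = idx.sort(dim=1)                            # preserve original token order
        idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        return tokens.gather(1, idx)                        # sparsified context

# Example: drop half of 576 vision tokens before they enter the LLM decoder.
sparsifier = TokenSparsifier(hidden_dim=1024, keep_ratio=0.5)
vision_tokens = torch.randn(2, 576, 1024)
print(sparsifier(vision_tokens).shape)  # torch.Size([2, 288, 1024])
```

In the sketch, shrinking the context length directly reduces attention and KV-cache cost for every subsequent decoding step, which is the efficiency effect the paper targets across both prefill and generation.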