VLA
VLA-InfoEntropy: A Training-Free Vision-Attention Information Entropy Approach for Vision-Language-Action Models Inference Acceleration and Success
Chuhang Liu, Yayun He, Zuheng Kang, Xiaoyang Qu, Jianzong Wang
Vision-Language-Action Models for Embodied Intelligence: Technological Review and Future Outlook
Chuyang Liu, Zuheng Kang, Botao Zhao, Xiaoyang Qu, Jianzong Wang, Xiaojun Ni, Hui Tian, Yayun He
Vision-Language-Action Models for Embodied Intelligence: A Technological Review and Future Outlook (survey, in Chinese)
Chuyang Liu, Zuheng Kang, Botao Zhao, Xiaoyang Qu, Jianzong Wang, Xiaojun Ni, Hui Tian, Yayun He
Last updated on Mar 4, 2026
From Knowing to Doing Precisely: A General Self-Correction and Termination Framework for VLA Models
While vision-language-action (VLA) models for embodied agents integrate perception, reasoning, and control, they remain constrained by …
Wentao Zhang, Aolan Sun, Wentao Mo, Xiaoyang Qu, Yuxin Zheng, Jianzong Wang
arXiv