From Knowing to Doing Precisely: A General Self-Correction and Termination Framework for VLA Models

Figure: An overview of the VLA-SCT framework.

Abstract

While vision-language-action (VLA) models for embodied agents integrate perception, reasoning, and control, they remain constrained by two critical weaknesses. First, during grasping tasks, the action tokens generated by the language model often exhibit subtle spatial deviations from the target object, resulting in grasp failures. Second, the models lack the ability to reliably recognize task completion, which leads to redundant actions and frequent timeout errors. To address these challenges and improve robustness, we propose VLA-SCT, a lightweight, training-free framework that operates as a self-correcting control loop, combining data-driven action refinement with conditional termination logic. Compared with baseline approaches, our method achieves consistent improvements across all datasets in the LIBERO benchmark, significantly increasing the success rate of fine manipulation tasks while ensuring accurate task completion, thereby promoting the deployment of more reliable VLA agents in complex, unstructured environments.
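The page does not include an implementation, but the loop the abstract describes (refine the proposed action toward the perceived target, then check a completion condition before acting again) can be sketched in a few lines. The Python below is a minimal illustration under assumed interfaces; `refine_action`, `control_loop`, the blending rule, and the distance-based termination test are all hypothetical stand-ins, not the authors' actual method:

```python
import numpy as np

def refine_action(pred_pos, target_xyz, alpha=0.5):
    # Hypothetical data-driven self-correction: blend the policy's predicted
    # waypoint toward the detected target centroid to reduce spatial deviation.
    return (1.0 - alpha) * pred_pos + alpha * target_xyz

def control_loop(policy, step_env, target_xyz, max_steps=200, tol=0.01):
    # Self-correcting control loop: propose -> refine -> execute -> check.
    gripper = np.zeros(3)
    for _ in range(max_steps):
        waypoint = policy(gripper)                      # raw VLA proposal (may deviate)
        waypoint = refine_action(waypoint, target_xyz)  # correction step
        gripper = step_env(gripper, waypoint)           # execute one control step
        if np.linalg.norm(gripper - target_xyz) < tol:
            return True                                 # terminate: task complete
    return False                                        # timeout: completion never recognized

if __name__ == "__main__":
    # Toy usage: a noisy "policy" and a trivial kinematic environment step.
    rng = np.random.default_rng(0)
    target = np.array([0.4, 0.1, 0.2])
    policy = lambda pos: target + rng.normal(0.0, 0.01, 3)  # prediction with deviation
    step_env = lambda pos, wp: pos + 0.5 * (wp - pos)       # move halfway to the waypoint
    print("task completed:", control_loop(policy, step_env, target))
```

The key design point the abstract emphasizes is captured by the two checks: the refinement corrects small spatial deviations before execution, and the explicit termination test stops the loop as soon as the goal condition holds, avoiding redundant actions and timeouts.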

Type: Publication
In 2026 IEEE International Conference on Acoustics, Speech, and Signal Processing
Aolan Sun
Researcher