---
license: mit
---
<p align="center">
<img src="logo (1).png" width="25%"/>
</p>
# **Fine-Grained Preference Optimization Improves Spatial Reasoning in VLMs**

______________________________________________________________________
## 💡 Introduction
<div align="center">
<a href="https://plan-lab.github.io/projects/spatialreasoner/"><img src="https://img.shields.io/badge/Project-Page-blue?style=for-the-badge&logo=googlechrome&logoColor=white"></a>
<a href="https://arxiv.org/pdf/2506.21656"><img src="https://img.shields.io/badge/Arxiv-2506.21656-red?style=for-the-badge&logo=arxiv&logoColor=white"></a>
<a href="https://github.com/PLAN-Lab/SpatialReasonerR1"><img src="https://img.shields.io/badge/GitHub-Repo-181717?style=for-the-badge&logo=github&logoColor=white"></a>
</div>
**Yifan Shen, Yuanzhe Liu, Jingyuan Zhu, Xu Cao, Xiaofeng Zhang, Yixiao He, Wenming Ye, James Matthew Rehg, Ismini Lourentzou**
Current Vision-Language Models (VLMs) struggle with fine-grained spatial reasoning, particularly when multi-step logic and precise spatial alignment are required. In this work, we introduce SpatialReasoner-R1, a novel VLM designed to address these limitations. First, we propose Multi-LLM Guided Monte Carlo Tree Search (M3CTS) together with Fine-Grained Spatial Rewards to construct a high-quality dataset. Second, we train our model with fine-grained Direct Preference Optimization (fDPO). fDPO introduces segment-specific preference granularity for descriptive grounding and logical reasoning, achieving an average improvement of 4.1% over standard DPO across spatial quality tasks, and a 9.0% boost in spatial quantity tasks. To address the scarcity of multi-step spatial reasoning data, M3CTS enables collaborative exploration of diverse reasoning paths, significantly enriching spatial comprehension and logical coherence. Empirical evaluations demonstrate that SpatialReasoner-R1 sets a new state-of-the-art on SpatialRGPT-Bench, outperforming the strongest baseline by 9.4% in average accuracy, while maintaining competitive performance on general vision-language tasks.
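To make the segment-specific idea behind fDPO concrete, the sketch below shows a minimal, pure-Python preference loss where each response segment (e.g., descriptive grounding vs. logical reasoning) contributes to the DPO-style log-ratio margin with its own weight. This is an illustrative simplification, not the paper's implementation: the function name `fdpo_loss`, the segment labels, and the per-segment `beta` values are all assumptions for demonstration.

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def fdpo_loss(segments, betas):
    """Illustrative segment-weighted DPO-style loss.

    segments: list of tuples
        (segment_type, logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected)
        holding per-segment log-probabilities under the policy and a frozen
        reference model, for the preferred (chosen) and dispreferred (rejected)
        responses.
    betas: dict mapping segment_type -> weight, so e.g. logical-reasoning
        segments can be weighted more heavily than descriptive ones.
    """
    margin = 0.0
    for seg_type, lp_c, lp_r, ref_c, ref_r in segments:
        beta = betas[seg_type]
        # Per-segment implicit-reward margin: (policy - reference) log-ratio
        # of chosen minus that of rejected, scaled by the segment's beta.
        margin += beta * ((lp_c - ref_c) - (lp_r - ref_r))
    # Standard DPO objective applied to the aggregated margin.
    return -math.log(sigmoid(margin))


# Hypothetical log-probs for a two-segment response pair.
segments = [
    ("description", -10.0, -12.0, -11.0, -11.0),
    ("reasoning", -20.0, -21.0, -20.5, -20.5),
]
betas = {"description": 0.1, "reasoning": 0.3}
loss = fdpo_loss(segments, betas)  # ≈ 0.474
```

Weighting the margin per segment rather than per response is what distinguishes this from vanilla DPO, where a single global beta scales the whole sequence-level log-ratio.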