

On the Suitability of Reinforcement Fine-Tuning to Visual Tasks

8 April 2025
Xiaxu Chen
Wei Li
Chunxu Liu
Chi Xie
Xiaoyan Hu
Chengqian Ma
Feng Zhu
Rui Zhao
Abstract

Reinforcement Fine-Tuning (RFT) has proved greatly valuable for enhancing the reasoning ability of LLMs. Researchers have begun applying RFT to MLLMs, hoping it will likewise enhance their visual understanding capabilities. However, these works are at a very early stage and have not examined how suitable RFT actually is for visual tasks. In this work, we endeavor to understand the suitability and limitations of RFT for visual tasks through experimental analysis and observation. We start with quantitative comparisons across various tasks, which show that RFT is generally better than SFT on visual tasks. To check whether such advantages are brought about by the reasoning process, we design a new reward that encourages the model to "think" more; the results show that more thinking can be beneficial for complicated tasks but harmful for simple ones. We hope this study can provide more insight into the rapid advancements on this topic.
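The abstract mentions a reward designed to encourage the model to "think" more but does not specify its form. The Python sketch below shows one plausible shape for such a reward: a correctness term plus a capped bonus on the length of the reasoning trace. The <think> tag format, the length coefficient, and the cap are all illustrative assumptions, not the paper's actual formulation.

import re

# Extract the reasoning trace from a <think>...</think> span (assumed format).
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def think_more_reward(response: str, answer: str, gold: str,
                      length_coef: float = 0.001, length_cap: int = 512) -> float:
    """Correctness reward plus a capped bonus for longer reasoning traces.

    length_coef and length_cap are hypothetical hyperparameters; capping
    keeps verbosity from dominating the correctness signal.
    """
    correct = 1.0 if answer.strip() == gold.strip() else 0.0
    match = THINK_RE.search(response)
    n_think_tokens = len(match.group(1).split()) if match else 0
    bonus = length_coef * min(n_think_tokens, length_cap)
    return correct + bonus

# Example: a correct answer with a longer trace earns a slightly higher reward.
resp = "<think>The object left of the cup is red, so the answer is red.</think>"
print(think_more_reward(resp, "red", "red"))  # 1.0 + small length bonus

A reward of this shape would reproduce the trade-off the abstract reports: on simple tasks the length bonus pushes the model toward unnecessary reasoning, while on complicated tasks it pays off.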

@article{chen2025_2504.05682,
  title={On the Suitability of Reinforcement Fine-Tuning to Visual Tasks},
  author={Xiaxu Chen and Wei Li and Chunxu Liu and Chi Xie and Xiaoyan Hu and Chengqian Ma and Feng Zhu and Rui Zhao},
  journal={arXiv preprint arXiv:2504.05682},
  year={2025}
}