HumanEdit: A High-Quality Human-Rewarded Dataset for Instruction-based Image Editing

5 December 2024
Jinbin Bai
Wei Chow
Ling Yang
Xiangtai Li
Juncheng Li
Hanwang Zhang
Shuicheng Yan
Abstract

We present HumanEdit, a high-quality, human-rewarded dataset specifically designed for instruction-guided image editing, enabling precise and diverse image manipulations through open-form language instructions. Previous large-scale editing datasets often incorporate minimal human feedback, leading to challenges in aligning datasets with human preferences. HumanEdit bridges this gap by employing human annotators to construct data pairs and administrators to provide feedback. With meticulous curation, HumanEdit comprises 5,751 images and required more than 2,500 hours of human effort across four stages, ensuring both accuracy and reliability for a wide range of image editing tasks. The dataset includes six distinct types of editing instructions: Action, Add, Counting, Relation, Remove, and Replace, encompassing a broad spectrum of real-world scenarios. All images in the dataset are accompanied by masks, and for a subset of the data, we ensure that the instructions are sufficiently detailed to support mask-free editing. Furthermore, HumanEdit offers comprehensive diversity and high-resolution 1024×1024 content sourced from various domains, setting a versatile new benchmark for instructional image editing datasets. With the aim of advancing future research and establishing evaluation benchmarks in the field of image editing, we release HumanEdit at this https URL.
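The abstract describes each example as a source image paired with a mask, an open-form instruction of one of six edit types, and a human-approved edited result. Below is a minimal sketch of how such a dataset might be loaded and filtered by edit type, assuming it is distributed in a Hugging Face `datasets`-compatible format; the repository id and column names are illustrative assumptions, not details confirmed by this page.

# Minimal sketch (assumptions): the repo id "BryanW/HumanEdit" and the column
# names "EDIT_TYPE", "INPUT_IMG", "MASK_IMG", "OUTPUT_IMG", and "INSTRUCTION"
# are placeholders for illustration, not confirmed by the abstract.
from datasets import load_dataset

# Load the full dataset (5,751 image pairs according to the paper).
ds = load_dataset("BryanW/HumanEdit", split="train")  # hypothetical repo id

# Keep only one of the six edit categories described in the paper, e.g. "Remove".
remove_subset = ds.filter(lambda ex: ex["EDIT_TYPE"] == "Remove")
print(len(remove_subset), "examples of the 'Remove' edit type")

# Each example is expected to pair a 1024x1024 source image, an edit mask,
# an open-form language instruction, and the human-rewarded edited output.
example = remove_subset[0]
print(example["INSTRUCTION"])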

@article{bai2025_2412.04280,
  title={HumanEdit: A High-Quality Human-Rewarded Dataset for Instruction-based Image Editing},
  author={Jinbin Bai and Wei Chow and Ling Yang and Xiangtai Li and Juncheng Li and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv preprint arXiv:2412.04280},
  year={2025}
}