WorldPM: Scaling Human Preference Modeling

Abstract

Motivated by scaling laws in language modeling that demonstrate how test loss scales as a power law with model and dataset sizes, we find that similar laws exist in preference modeling. We propose World Preference Modeling (WorldPM) to emphasize this scaling potential, where World Preference embodies a unified representation of human preferences. In this paper, we collect preference data from public forums covering diverse user communities, and conduct extensive training using 15M-scale data across models ranging from 1.5B to 72B parameters. We observe distinct patterns across different evaluation metrics: (1) Adversarial metrics (ability to identify deceptive features) consistently scale up with increased training data and base model size; (2) Objective metrics (objective knowledge with well-defined answers) show emergent behavior in larger language models, highlighting WorldPM's scalability potential; (3) Subjective metrics (subjective preferences from a limited number of humans or AI) do not demonstrate scaling trends. Further experiments validate the effectiveness of WorldPM as a foundation for preference fine-tuning. Through evaluations on 7 benchmarks with 20 subtasks, we find that WorldPM broadly improves the generalization performance across human preference datasets of varying sizes (7K, 100K and 800K samples), with performance gains exceeding 5% on many key subtasks. Integrating WorldPM into our internal RLHF pipeline, we observe significant improvements on both in-house and public evaluation sets, with notable gains of 4% to 8% in our in-house evaluations.
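
For context, the language-modeling scaling laws referenced in the abstract are usually stated in a saturating power-law form. The parameterization below is the common one from that literature, shown only as an illustrative sketch; it is not claimed to be the exact functional form fitted in WorldPM:

L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

where N is the number of model parameters, D is the amount of training data, E is an irreducible loss term, and A, B, \alpha, \beta are fitted constants. WorldPM studies whether preference-modeling test loss follows analogous trends as N (1.5B to 72B) and D (up to 15M preference pairs) grow.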

@article{wang2025_2505.10527,
  title={WorldPM: Scaling Human Preference Modeling},
  author={Binghai Wang and Runji Lin and Keming Lu and Le Yu and Zhenru Zhang and Fei Huang and Chujie Zheng and Kai Dang and Yang Fan and Xingzhang Ren and An Yang and Binyuan Hui and Dayiheng Liu and Tao Gui and Qi Zhang and Xuanjing Huang and Yu-Gang Jiang and Bowen Yu and Jingren Zhou and Junyang Lin},
  journal={arXiv preprint arXiv:2505.10527},
  year={2025}
}