Testing Cross-Validation Variants in Ranking Environments

25 May 2021
B. Sziklai
Máté Baranyi
Károly Héberger
arXiv: 2105.11939
Abstract

This research investigates how to determine whether two rankings can come from the same distribution. We evaluate three hybrid tests: Wilcoxon's, Dietterich's, and Alpaydin's statistical tests, each combined with cross-validation and operating with fold counts ranging from 5 to 10, for 18 variants altogether. We use the framework of a popular comparative statistical test, the Sum of Ranking Differences, but our results are representative of ranking environments in general. To compare the methods, we follow an innovative approach borrowed from economics: we designed eight scenarios for testing type I and type II errors, representing typical situations (i.e., different data structures) that cross-validation (CV) tests face routinely. The optimal CV method depends on the preferences regarding the minimization of type I/II errors, the size of the input, and the expected patterns in the data. The Wilcoxon method with eight folds proved to be the best under all three investigated input sizes, although there were scenarios and decision aspects where other methods, namely Wilcoxon 10 and Alpaydin 10, performed better.
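To make the Wilcoxon-CV hybrid concrete, here is a minimal sketch in Python of the general idea: split paired observations into k folds, compute the per-fold difference between two methods, and apply Wilcoxon's signed-rank test to those fold-level differences. The function name, the synthetic data, and the use of per-fold mean scores are assumptions made for illustration; the paper's actual procedure operates on rankings within the Sum of Ranking Differences framework, which this sketch does not reproduce.

```python
# Minimal sketch of a Wilcoxon + k-fold CV hybrid test.
# Hypothetical helper and data; not the authors' exact SRD-based procedure.
import numpy as np
from scipy.stats import wilcoxon

def cv_wilcoxon_test(scores_a, scores_b, n_folds=8, seed=0):
    """Shuffle paired observations into n_folds folds, compute the
    per-fold mean difference between the two methods, and run
    Wilcoxon's signed-rank test on the fold-level differences."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(scores_a))
    fold_diffs = [scores_a[f].mean() - scores_b[f].mean()
                  for f in np.array_split(idx, n_folds)]
    return wilcoxon(fold_diffs)

# Purely illustrative synthetic data: method B is slightly shifted from A.
rng = np.random.default_rng(42)
a = rng.normal(0.80, 0.05, size=200)
b = a + rng.normal(0.02, 0.05, size=200)
stat, p = cv_wilcoxon_test(a, b, n_folds=8)
print(f"Wilcoxon statistic = {stat:.3f}, p = {p:.4f}")
```

With eight folds (the best-performing variant in the abstract's comparison), the test statistic is computed from only eight paired differences, which is why the choice of fold count trades off type I against type II error as the paper investigates.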
