Are We Really Measuring Progress? Transferring Insights from Evaluating Recommender Systems to Temporal Link Prediction
Recent work has questioned the reliability of graph learning benchmarks, citing concerns around task design, methodological rigor, and data suitability. In this extended abstract, we contribute to this discussion by focusing on evaluation strategies in Temporal Link Prediction (TLP). We observe that current evaluation protocols are often affected by one or more of the following issues: (1) inconsistent sampled metrics, (2) reliance on hard negative sampling, often introduced as a means to improve robustness, and (3) metrics that, by combining predictions across source nodes, implicitly assume equal base probabilities among them. We support these claims through illustrative examples and connections to longstanding concerns in the recommender systems community. Our ongoing work aims to systematically characterize these problems and explore alternatives that can lead to more robust and interpretable evaluation. We conclude with a discussion of potential directions for improving the reliability of TLP benchmarks.
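As an illustration of issue (1), the sketch below (our own minimal example on toy scores, not code from the paper) shows how a sampled MRR for the same set of predictions changes with the number of negatives drawn per positive edge, so numbers reported under different sampling protocols are not directly comparable.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_mrr(pos_scores, neg_scores, num_negatives):
    """Mean Reciprocal Rank where each positive edge is ranked against a
    random sample of `num_negatives` of its candidate negatives."""
    reciprocal_ranks = []
    for pos, negs in zip(pos_scores, neg_scores):
        sampled = rng.choice(negs, size=num_negatives, replace=False)
        # Rank of the positive among the sampled negatives (ties counted against it).
        rank = 1 + np.sum(sampled >= pos)
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

# Toy scores: 200 positive edges, each with 1000 candidate negatives.
pos_scores = rng.normal(loc=0.5, scale=1.0, size=200)
neg_scores = rng.normal(loc=0.0, scale=1.0, size=(200, 1000))

# The same model obtains very different MRR values depending on how many
# negatives are sampled per positive edge.
for k in (20, 100, 1000):
    print(f"negatives per positive = {k:4d} -> sampled MRR = {sampled_mrr(pos_scores, neg_scores, k):.3f}")
```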
@article{cornell2025_2506.12588,
  title={Are We Really Measuring Progress? Transferring Insights from Evaluating Recommender Systems to Temporal Link Prediction},
  author={Filip Cornell and Oleg Smirnov and Gabriela Zarzar Gandler and Lele Cao},
  journal={arXiv preprint arXiv:2506.12588},
  year={2025}
}