Practitioner Motives to Select Hyperparameter Optimization Methods

ACM Transactions on Computer-Human Interaction (TOCHI), 2022
Main: 25 pages · Bibliography: 5 pages · Appendix: 6 pages · 11 figures · 4 tables
Abstract

Programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization and evolutionary algorithms, show high sampling efficiency in finding optimal hyperparameter configurations during the development of machine learning (ML) models. Yet practitioners often use less sample-efficient HPO methods, such as grid search, which frequently results in under-optimized ML models. We suspect this behavior arises because practitioners choose HPO methods based on differing motives. These practitioner motives, however, still need to be clarified to enable user-centered development of HPO tools. To uncover practitioner motives for using different HPO methods, we conducted 20 semi-structured interviews and an online questionnaire with 49 ML experts. By presenting main goals (e.g., increasing ML model understanding) and contextual factors affecting practitioners' selection of HPO methods (e.g., available computing resources), this study offers a conceptual foundation for better understanding why practitioners use different HPO methods, supporting the design of more user-centered and context-adaptive HPO tools in automated ML.
