AutoML from Service Provider's Perspective: Multi-device, Multi-tenant Model Selection with GP-EI

17 March 2018
Chen Yu
Bojan Karlas
Jie Zhong
Ce Zhang
Ji Liu
Abstract

AutoML has become a popular service that is provided by most leading cloud service providers today. In this paper, we focus on the AutoML problem from the \emph{service provider's perspective}, motivated by the following practical consideration: When an AutoML service needs to serve \emph{multiple users} with \emph{multiple devices} at the same time, how can we allocate these devices to users in an efficient way? We focus on GP-EI, one of the most popular algorithms for automatic model selection and hyperparameter tuning, used by systems such as Google Vizier. The technical contribution of this paper is the first multi-device, multi-tenant algorithm for GP-EI that is aware of \emph{multiple} computation devices and multiple users sharing the same set of computation devices. Theoretically, given $N$ users and $M$ devices, we obtain a regret bound of $O\big((\text{\bf MIU}(T,K) + M)\frac{N^2}{M}\big)$, where $\text{\bf MIU}(T,K)$ refers to the maximal incremental uncertainty up to time $T$ for the covariance matrix $K$. Empirically, we evaluate our algorithm on two applications of automatic model selection, and show that our algorithm significantly outperforms the strategy of serving users independently. Moreover, when multiple computation devices are available, we achieve near-linear speedup when the number of users is much larger than the number of devices.
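
For context, the sketch below illustrates the standard single-tenant GP-EI acquisition step that the paper builds on: fit a Gaussian process to observed (configuration, validation loss) pairs and select the candidate configuration with the highest Expected Improvement. This is not the paper's multi-device, multi-tenant allocation algorithm; the kernel choice, candidate set, and function names are illustrative assumptions, using scikit-learn and SciPy.

```python
# Minimal sketch of standard GP-EI acquisition (not the paper's multi-tenant variant).
# Assumes minimization of a validation loss; all names below are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best_so_far, xi=0.01):
    """Expected Improvement over the current best observed loss (minimization)."""
    sigma = np.maximum(sigma, 1e-9)        # guard against zero predictive std
    imp = best_so_far - mu - xi            # predicted improvement
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

def gp_ei_select(X_obs, y_obs, X_candidates):
    """Fit a GP to observed (configuration, loss) pairs and return the index
    of the candidate with the largest expected improvement."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(X_candidates, return_std=True)
    ei = expected_improvement(mu, sigma, best_so_far=np.min(y_obs))
    return int(np.argmax(ei))

# Example: three observed configurations, five candidates in a 2-D search space.
rng = np.random.default_rng(0)
X_obs = rng.uniform(size=(3, 2))
y_obs = rng.uniform(size=3)
X_cand = rng.uniform(size=(5, 2))
print("next configuration to evaluate:", gp_ei_select(X_obs, y_obs, X_cand))
```

The paper's contribution extends this per-user loop to the setting where $N$ users share $M$ devices, so the scheduler must decide both which configuration to run and which user's job each device serves.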
