Scaling Gaussian Processes for Learning Curve Prediction via Latent Kronecker Structure

A key task in AutoML is to model learning curves of machine learning models jointly as a function of model hyper-parameters and training progression. While Gaussian processes (GPs) are suitable for this task, naïve GPs require $\mathcal{O}(n^3 m^3)$ time and $\mathcal{O}(n^2 m^2)$ space for $n$ hyper-parameter configurations and $\mathcal{O}(m)$ learning curve observations per hyper-parameter. Efficient inference via Kronecker structure is typically incompatible with early-stopping due to missing learning curve values. We impose latent Kronecker structure to leverage efficient product kernels while handling missing values. In particular, we interpret the joint covariance matrix of observed values as the projection of a latent Kronecker product. Combined with iterative linear system solvers and structured matrix-vector multiplication, our method only requires $\mathcal{O}(n^3 + m^3)$ time and $\mathcal{O}(n^2 + m^2)$ space. We show that our GP model can match the performance of a Transformer on a learning curve prediction task.
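To make the core idea concrete, here is a minimal NumPy/SciPy sketch, not the authors' implementation: the covariance over observed values is treated as a projection $P$ of a latent Kronecker product $K_x \otimes K_t$, matrix-vector products use the vec trick so the $nm \times nm$ matrix is never formed, and conjugate gradients solve the resulting linear system. All sizes, kernels, and names (`rbf`, `mask`, `noise`) are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Illustrative sizes: n hyper-parameter configurations, m learning curve steps.
rng = np.random.default_rng(0)
n, m = 50, 30
X = rng.normal(size=(n, 2))            # hyper-parameter values
T = np.linspace(0.0, 1.0, m)[:, None]  # training progression

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel, a stand-in for each factor of the product kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

K_x = rbf(X, X)                   # n x n covariance over hyper-parameters
K_t = rbf(T, T, lengthscale=0.2)  # m x m covariance over training progression

# Early stopping: configuration i is only observed for its first stop[i] steps.
stop = rng.integers(low=5, high=m + 1, size=n)
mask = (np.arange(m)[None, :] < stop[:, None]).ravel()  # observed entries of the latent n*m grid
y = rng.normal(size=mask.sum())                         # stand-in for observed curve values
noise = 1e-2                                            # observation noise variance

def matvec(v):
    # Apply (P kron(K_x, K_t) P^T + noise * I) to v without forming the nm x nm matrix.
    v = np.asarray(v).ravel()
    latent = np.zeros(n * m)
    latent[mask] = v                    # P^T: scatter observations onto the latent grid
    V = latent.reshape(n, m)
    W = K_x @ V @ K_t                   # Kronecker MVM via the vec trick (K_t is symmetric)
    return W.ravel()[mask] + noise * v  # P: project back down to the observed entries

A = LinearOperator(shape=(mask.sum(), mask.sum()), matvec=matvec, dtype=np.float64)
alpha, info = cg(A, y, maxiter=5000)  # conjugate gradients touch A only through MVMs
# info == 0 signals convergence; alpha are the representer weights for GP prediction.
```

Each matrix-vector product above costs $\mathcal{O}(nm(n + m))$ time and $\mathcal{O}(n^2 + m^2)$ space for the kernel factors, which is what lets an iterative solver avoid the naïve $\mathcal{O}(n^3 m^3)$ cost.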