Joker: Joint Optimization Framework for Lightweight Kernel Machines
Junhong Zhang, Zhihui Lai
23 May 2025 · arXiv:2505.17765

Papers citing "Joker: Joint Optimization Framework for Lightweight Kernel Machines"

Showing 20 of 20 citing papers.

 1. Fast training of large kernel models with delayed projections
    Amirhesam Abedsoltan, Siyuan Ma, Parthe Pandit, Mikhail Belkin (25 Nov 2024)
 2. Stochastic Gradient Descent for Gaussian Processes Done Right
    J. Lin, Shreyas Padhy, Javier Antorán, Austin Tripp, Alexander Terenin, Csaba Szepesvári, José Miguel Hernández-Lobato, David Janz (31 Oct 2023)
 3. Error Bounds for Learning with Vector-Valued Random Features
    S. Lanthaler, Nicholas H. Nelsen (26 May 2023)
 4. Toward Large Kernel Models
    Amirhesam Abedsoltan, M. Belkin, Parthe Pandit (06 Feb 2023)
 5. Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS
    Lin Chen, Sheng Xu (22 Sep 2020)
 6. On the Similarity between the Laplace and Neural Tangent Kernels
    Amnon Geifman, A. Yadav, Yoni Kasten, Meirav Galun, David Jacobs, Ronen Basri (03 Jul 2020)
 7. Kernel methods through the roof: handling billions of points efficiently
    Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, Alessandro Rudi (18 Jun 2020)
 8. Globally Convergent Newton Methods for Ill-conditioned Generalized Self-concordant Losses
    Ulysse Marteau-Ferey, Francis R. Bach, Alessandro Rudi (03 Jul 2019)
 9. On Fast Leverage Score Sampling and Optimal Learning
    Alessandro Rudi, Daniele Calandriello, Luigi Carratino, Lorenzo Rosasco (31 Oct 2018)
10. GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration [GP]
    Jacob R. Gardner, Geoff Pleiss, D. Bindel, Kilian Q. Weinberger, A. Wilson (28 Sep 2018)
11. Neural Tangent Kernel: Convergence and Generalization in Neural Networks
    Arthur Jacot, Franck Gabriel, Clément Hongler (20 Jun 2018)
12. FALKON: An Optimal Large Scale Kernel Method
    Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco (31 May 2017)
13. Diving into the shallows: a computational perspective on large-scale shallow learning
    Siyuan Ma, M. Belkin (30 Mar 2017)
14. GPflow: A Gaussian process library using TensorFlow [GP]
    A. G. Matthews, Mark van der Wilk, T. Nickson, Keisuke Fujii, A. Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, J. Hensman (27 Oct 2016)
15. Large Scale Kernel Learning using Block Coordinate Descent
    Stephen Tu, Rebecca Roelofs, Shivaram Venkataraman, Benjamin Recht (17 Feb 2016)
16. Coordinate Descent Converges Faster with the Gauss-Southwell Rule Than Random Selection
    J. Nutini, Mark Schmidt, I. Laradji, M. Friedlander, H. Koepke (01 Jun 2015)
17. SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization [ODL]
    Zheng Qu, Peter Richtárik, Martin Takáč, Olivier Fercoq (08 Feb 2015)
18. Scalable Kernel Methods via Doubly Stochastic Gradients
    Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, Le Song (21 Jul 2014)
19. Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
    Shai Shalev-Shwartz, Tong Zhang (10 Sep 2012)
20. Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function
    Peter Richtárik, Martin Takáč (14 Jul 2011)