InTune: Reinforcement Learning-based Data Pipeline Optimization for Deep Recommendation Models

13 August 2023
Authors: Kabir Nagrecha, Lingyi Liu, P. Delgado, Prasanna Padmanabhan
Topics: OffRL, AI4CE

Papers citing "InTune: Reinforcement Learning-based Data Pipeline Optimization for Deep Recommendation Models"

9 of 9 papers shown
| Title | Authors | Topics | Metrics | Date |
| --- | --- | --- | --- | --- |
| PreSto: An In-Storage Data Preprocessing System for Training Recommendation Models | Yunjae Lee, Hyeseong Kim, Minsoo Rhu | — | 24 / 3 / 0 | 11 Jun 2024 |
| Towards a Systems Theory of Algorithms | Florian Dorfler, Zhiyu He, Giuseppe Belgioioso, S. Bolognani, John Lygeros, Michael Muehlebach | AI4CE | 25 / 10 / 0 | 25 Jan 2024 |
| Saturn: An Optimized Data System for Large Model Deep Learning Workloads | Kabir Nagrecha, Arun Kumar | — | 11 / 6 / 0 | 03 Sep 2023 |
| RecShard: Statistical Feature-Based Memory Optimization for Industry-Scale Neural Recommendation | Geet Sethi, Bilge Acun, Niket Agarwal, Christos Kozyrakis, Caroline Trippel, Carole-Jean Wu | — | 36 / 65 / 0 | 25 Jan 2022 |
| Hydra: A System for Large Multi-Model Deep Learning | Kabir Nagrecha, Arun Kumar | MoE, AI4CE | 22 / 5 / 0 | 16 Oct 2021 |
| Model-Parallel Model Selection for Deep Learning Systems | Kabir Nagrecha | — | 29 / 16 / 0 | 14 Jul 2021 |
| tf.data: A Machine Learning Data Processing Framework | D. Murray, Jiří Šimša, Ana Klimovic, Ihor Indyk | PINN, AI4CE, LMTD | 39 / 86 / 0 | 28 Jan 2021 |
| ZeRO-Offload: Democratizing Billion-Scale Model Training | Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyang Yang, Minjia Zhang, Dong Li, Yuxiong He | MoE | 160 / 399 / 0 | 18 Jan 2021 |
| Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro | MoE | 243 / 1,791 / 0 | 17 Sep 2019 |