arXiv:2409.18164
Data-Prep-Kit: getting your data ready for LLM application development

26 September 2024
David Wood, Boris Lublinsky, Alexy Roytman, Shivdeep Singh, Abdulhamid A. Adebayo, Revital Eres, Mohammad Nassar, Hima Patel, Yousaf Shah, C. Adam, Petros Zerfos, Nirmit Desai, Daiki Tsuzuku, Takuya Goto, Michele Dolfi, Saptha Surendran, Paramesvaran Selvam, Sungeun An, Yuan Chi Chang, Dhiraj Joshi, Hajar Emami-Gohari, Xuan-Hong Dang, Yan Koyfman, Shahrokh Daijavad
Abstract

Data preparation is the first and a crucial step in any Large Language Model (LLM) development effort. This paper introduces Data Prep Kit (DPK), an easy-to-use, extensible, and scale-flexible open-source data preparation toolkit. DPK is architected and designed so that users can scale their data preparation to their needs: they can prepare data on a local machine or effortlessly scale out to run on a cluster with thousands of CPU cores. DPK comes with a highly scalable yet extensible set of modules that transform natural-language and code data. Users who need additional transforms can develop them easily with DPK's extensive support for transform creation. These modules can be used independently or pipelined to perform a series of operations. In this paper, we describe the DPK architecture and show its performance from a small scale up to a very large number of CPUs. The modules from DPK have been used for the preparation of the Granite models [1] [2]. We believe DPK is a valuable contribution to the AI community, letting it easily prepare data to enhance the performance of LLMs or to fine-tune models with Retrieval-Augmented Generation (RAG).
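
The abstract's central design idea is a set of independent transform modules that can also be chained into a pipeline. The sketch below is only a rough illustration of that pattern, not DPK's actual API: every class and function name in it (Transform, LanguageFilter, ExactDedup, run_pipeline) is hypothetical and chosen for this example.

# Illustrative sketch only. These names are hypothetical and do not
# reproduce DPK's real API; they mirror the pattern the abstract
# describes, where each transform is an independent module and
# transforms can be composed into a pipeline.
from abc import ABC, abstractmethod
from typing import Iterable

class Transform(ABC):
    """One data-preparation step, e.g. language filtering or deduplication."""
    @abstractmethod
    def apply(self, docs: Iterable[dict]) -> Iterable[dict]:
        ...

class LanguageFilter(Transform):
    def __init__(self, keep: str = "en"):
        self.keep = keep

    def apply(self, docs):
        # Keep only documents whose 'lang' field matches the target language.
        return (d for d in docs if d.get("lang") == self.keep)

class ExactDedup(Transform):
    def apply(self, docs):
        # Drop documents whose text has already been seen.
        seen = set()
        for d in docs:
            key = hash(d["text"])
            if key not in seen:
                seen.add(key)
                yield d

def run_pipeline(docs, transforms):
    # Each transform can run on its own or be composed with others.
    for t in transforms:
        docs = t.apply(docs)
    return list(docs)

if __name__ == "__main__":
    corpus = [
        {"text": "hello world", "lang": "en"},
        {"text": "hello world", "lang": "en"},  # exact duplicate
        {"text": "bonjour", "lang": "fr"},
    ]
    cleaned = run_pipeline(corpus, [LanguageFilter("en"), ExactDedup()])
    print(cleaned)  # [{'text': 'hello world', 'lang': 'en'}]

Actual DPK transforms are built to scale from a local machine to a cluster with thousands of CPU cores, as the abstract notes, but the compose-and-run shape shown here is the same.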
