Shakti-VLMs: Scalable Vision-Language Models for Enterprise AI

24 February 2025
Syed Abdul Gaffar Shakhadri
Kruthika KR
Kartik Basavaraj Angadi
VLM
Abstract

We introduce Shakti-VLM, a family of vision-language models at 1B and 4B parameter scales designed to address data-efficiency challenges in multimodal learning. While recent VLMs achieve strong performance through extensive training data, Shakti models leverage architectural innovations to attain competitive results with fewer tokens. Key advancements include QK-Normalization for attention stability, hybrid normalization techniques, and enhanced positional encoding. A three-stage training strategy further optimizes learning efficiency. Evaluations show that Shakti-VLM-1B and Shakti-VLM-4B excel in document understanding, visual reasoning, OCR extraction, and general multimodal reasoning. Our results highlight that high performance can be achieved through model design and training strategy rather than sheer data volume, making Shakti an efficient solution for enterprise-scale multimodal tasks.
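
The abstract names QK-Normalization as the key attention-stability change but gives no formulation here. Below is a minimal sketch of one common variant, LayerNorm applied per head to queries and keys before the dot product; the module name, dimensions, and choice of LayerNorm are illustrative assumptions, not the paper's confirmed design.

# Minimal sketch of QK-Normalization in self-attention (assumed variant:
# LayerNorm over the per-head channel dimension; the paper may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKNormAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        # Normalizing q and k bounds the attention logits, which helps
        # keep softmax gradients stable during training.
        self.q_norm = nn.LayerNorm(self.head_dim)
        self.k_norm = nn.LayerNorm(self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (b, heads, n, head_dim)
        q, k = self.q_norm(q), self.k_norm(k)  # the QK-Normalization step
        out = F.scaled_dot_product_attention(q, k, v)
        return self.proj(out.transpose(1, 2).reshape(b, n, d))

# Usage sketch: a batch of 2 sequences of 16 tokens with width 512.
x = torch.randn(2, 16, 512)
y = QKNormAttention(512)(x)  # shape: (2, 16, 512)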

@article{shakhadri2025_2502.17092,
  title={Shakti-VLMs: Scalable Vision-Language Models for Enterprise AI},
  author={Syed Abdul Gaffar Shakhadri and Kruthika KR and Kartik Basavaraj Angadi},
  journal={arXiv preprint arXiv:2502.17092},
  year={2025}
}