Can Perplexity Predict Fine-Tuning Performance? An Investigation of Tokenization Effects on Sequential Language Models for Nepali

28 April 2024
Nishant Luitel
Nirajan Bekoju
Anand Kumar Sah
Subarna Shakya
arXiv:2404.18071

Papers citing "Can Perplexity Predict Fine-Tuning Performance? An Investigation of Tokenization Effects on Sequential Language Models for Nepali"

Exploring Tokenization Strategies and Vocabulary Sizes for Enhanced Arabic Language Models
M. Alrefaie
Nour Eldin Morsy
Nada Samir
17 Mar 2024