Fine-Tuning or Fine-Failing? Debunking Performance Myths in Large Language Models
arXiv: 2406.11201
17 June 2024
Scott Barnett, Zac Brannelly, Stefanus Kurniawan, Sheng Wong
LRM
ArXiv (abs) | PDF | HTML

Papers citing "Fine-Tuning or Fine-Failing? Debunking Performance Myths in Large Language Models"

3 citing papers shown
In-Context Distillation with Self-Consistency Cascades: A Simple, Training-Free Way to Reduce LLM Agent Costs
Vishnu Sarukkai, Asanshay Gupta, James Hong, Michael Gharbi, Kayvon Fatahalian
02 Dec 2025

From Words to Wisdom: Discourse Annotation and Baseline Models for Student Dialogue Understanding
Farjana Sultana Mim, Shuchin Aeron, Eric Miller, Kristen Wendell
25 Nov 2025

Is Exchangeability better than I.I.D to handle Data Distribution Shifts while Pooling Data for Data-scarce Medical image segmentation?
Ayush Roy, Samin Enam, Jun Xia, Vishnu Suresh Lokhande, Won Hwa Kim
OOD
25 Jul 2025