
Preserving Pre-trained Features Helps Calibrate Fine-tuned Language Models

International Conference on Learning Representations (ICLR), 2023
30 May 2023
Guande He, Jianfei Chen, Jun Zhu
arXiv (abs) · PDF · HTML

Papers citing "Preserving Pre-trained Features Helps Calibrate Fine-tuned Language Models"

13 citing papers shown.
From Calibration to Collaboration: LLM Uncertainty Quantification Should Be More Human-Centered
Siddartha Devic, Tejas Srinivasan, Jesse Thomason, Willie Neiswanger
09 Jun 2025

Comparing Uncertainty Measurement and Mitigation Methods for Large Language Models: A Systematic Review
Toghrul Abbasli, Kentaroh Toyoda, Yuan Wang, Leon Witt, Muhammad Asif Ali, Yukai Miao, Dan Li, Qingsong Wei
UQCV
25 Apr 2025

Minimal Ranks, Maximum Confidence: Parameter-efficient Uncertainty Quantification for LoRA
Patryk Marszałek, Klaudia Bałazy, Jacek Tabor, Tomasz Kuśmierczyk
UQCV
17 Feb 2025

Uncertainty-Aware Adaptation of Large Language Models for Protein-Protein Interaction Analysis
Sanket Jantre, Tianle Wang, Gilchan Park, Kriti Chopra, Nicholas Jeon, Xiaoning Qian, Nathan M. Urban, Byung-Jun Yoon
10 Feb 2025

DiTASK: Multi-Task Fine-Tuning with Diffeomorphic Transformations
Computer Vision and Pattern Recognition (CVPR), 2025
Krishna Sri Ipsit Mantri, Carola-Bibiane Schönlieb, Bruno Ribeiro, Chaim Baskin, Moshe Eliasof
09 Feb 2025

The Best Instruction-Tuning Data are Those That Fit
Dylan Zhang, Qirun Dai, Hao Peng
ALM
06 Feb 2025

Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs
Ruijia Niu, D. Wu, Rose Yu, Yi-An Ma
09 Oct 2024

Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment
Sangwon Yu, Jongyoon Song, Bongkyu Hwang, Hoyoung Kang, Sooah Cho, Junhwa Choi, Seongho Joe, Taehee Lee, Youngjune Gwon, Sungroh Yoon
31 Jul 2024

LoRA Dropout as a Sparsity Regularizer for Overfitting Control
Yang Lin, Xinyu Ma, Xu Chu, Yujie Jin, Zhibang Yang, Yasha Wang, Hong-yan Mei
15 Apr 2024

Uncertainty quantification in fine-tuned LLMs using LoRA ensembles
Oleksandr Balabanov, Hampus Linander
UQCV
19 Feb 2024

LoRA ensembles for large language model fine-tuning
Xi Wang, Laurence Aitchison, Maja Rudolph
UQCV
29 Sep 2023

CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Rachneet Sachdeva, Martin Tutek, Iryna Gurevych
OODD
14 Sep 2023

Bayesian Low-rank Adaptation for Large Language Models
International Conference on Learning Representations (ICLR), 2023
Adam X. Yang, Maxime Robeyns, Xi Wang, Laurence Aitchison
AI4CE, BDL
24 Aug 2023