Quantifying Prediction Consistency Under Model Multiplicity in Tabular LLMs
arXiv:2407.04173
4 July 2024
Faisal Hamman, Pasan Dissanayake, Saumitra Mishra, Freddy Lecue, Sanghamitra Dutta
Topics: AAML

Papers citing "Quantifying Prediction Consistency Under Model Multiplicity in Tabular LLMs"

9 papers

1. An overview of model uncertainty and variability in LLM-based sentiment analysis. Challenges, mitigation strategies and the role of explainability
   David Herrera-Poyatos, Carlos Peláez-González, Cristina Zuheros, Andrés Herrera-Poyatos, Virilo Tejedor, F. Herrera, Rosana Montes
   06 Apr 2025

2. Automated Consistency Analysis of LLMs
   Aditya Patwardhan, Vivek Vaidya, Ashish Kundu
   10 Feb 2025

3. The Curious Case of Arbitrariness in Machine Learning
   Prakhar Ganesh, Afaf Taik, G. Farnadi
   28 Jan 2025

4. Unleashing the Potential of Large Language Models for Predictive Tabular Tasks in Data Science
   Yazheng Yang, Yuqi Wang, Sankalok Sen, Lei Li, Qi Liu
   Topics: LMTD
   29 Mar 2024

5. From Supervised to Generative: A Novel Paradigm for Tabular Deep Learning with Large Language Models
   Xumeng Wen, Han Zhang, Shun Zheng, Wei Xu, Jiang Bian
   Topics: LMTD, ALM
   11 Oct 2023

6. Exploring the Whole Rashomon Set of Sparse Decision Trees
   Rui Xin, Chudi Zhong, Zhi Chen, Takuya Takagi, Margo Seltzer, Cynthia Rudin
   16 Sep 2022

7. Can Foundation Models Wrangle Your Data?
   A. Narayan, Ines Chami, Laurel J. Orr, Simran Arora, Christopher Ré
   Topics: LMTD, AI4CE
   20 May 2022

8. Multitask Prompted Training Enables Zero-Shot Task Generalization
   Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
   Topics: LRM
   15 Oct 2021

9. Consistent Counterfactuals for Deep Models
   Emily Black, Zifan Wang, Matt Fredrikson, Anupam Datta
   Topics: BDL, OffRL, OOD
   06 Oct 2021