arXiv: 2406.12739
Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages
18 June 2024
Fabian David Schmidt, Philipp Borchert, Ivan Vulić, Goran Glavaš
Papers citing "Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages" (5 of 5 papers shown)
1. How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild
   Saad Obaid ul Islam, Anne Lauscher, Goran Glavaš · HILM, LRM · 21 Feb 2025
2. Language Fusion for Parameter-Efficient Cross-lingual Transfer
   Philipp Borchert, Ivan Vulić, Marie-Francine Moens, Jochen De Weerdt · 12 Jan 2025
3. LLMs are Also Effective Embedding Models: An In-depth Overview
   Chongyang Tao, Tao Shen, Shen Gao, Junshuo Zhang, Zhen Li, Zhengwei Tao, Shuai Ma · 17 Dec 2024
4. Analyzing and Adapting Large Language Models for Few-Shot Multilingual NLU: Are We There Yet?
   E. Razumovskaia, Ivan Vulić, Anna Korhonen · 04 Mar 2024
5. Should we Stop Training More Monolingual Models, and Simply Use Machine Translation Instead?
   T. Isbister, F. Carlsson, Magnus Sahlgren · 21 Apr 2021