arXiv:2410.15037
mHumanEval -- A Multilingual Benchmark to Evaluate Large Language Models for Code Generation
Nishat Raihan, Antonios Anastasopoulos, Marcos Zampieri
28 January 2025
Papers citing "mHumanEval -- A Multilingual Benchmark to Evaluate Large Language Models for Code Generation" (5 papers)
How Accurately Do Large Language Models Understand Code?
Sabaat Haroon, Ahmad Faraz Khan, Ahmad Humayun, Waris Gill, Abdul Haddi Amjad, A. R. Butt, Mohammad Taha Khan, Muhammad Ali Gulzar
06 Apr 2025
Aligning Multimodal LLM with Human Preference: A Survey
Tao Yu, Y. Zhang, Chaoyou Fu, Junkang Wu, Jinda Lu, ..., Qingsong Wen, Z. Zhang, Yan Huang, Liang Wang, T. Tan
18 Mar 2025
TigerLLM -- A Family of Bangla Large Language Models
Nishat Raihan, Marcos Zampieri
14 Mar 2025
Code LLMs: A Taxonomy-based Survey
Nishat Raihan, Christian D. Newman, Marcos Zampieri
11 Dec 2024
MojoBench: Language Modeling and Benchmarks for Mojo
Nishat Raihan, Joanna C. S. Santos, Marcos Zampieri
23 Oct 2024