ResearchTrend.AI

arXiv:2410.15037
mHumanEval -- A Multilingual Benchmark to Evaluate Large Language Models for Code Generation

28 January 2025
Nishat Raihan, Antonios Anastasopoulos, Marcos Zampieri
ELM

Papers citing "mHumanEval -- A Multilingual Benchmark to Evaluate Large Language Models for Code Generation"

5 / 5 papers shown

1. How Accurately Do Large Language Models Understand Code? (06 Apr 2025)
   Sabaat Haroon, Ahmad Faraz Khan, Ahmad Humayun, Waris Gill, Abdul Haddi Amjad, A. R. Butt, Mohammad Taha Khan, Muhammad Ali Gulzar
   ELM, LRM
2. Aligning Multimodal LLM with Human Preference: A Survey (18 Mar 2025)
   Tao Yu, Y. Zhang, Chaoyou Fu, Junkang Wu, Jinda Lu, ..., Qingsong Wen, Z. Zhang, Yan Huang, Liang Wang, T. Tan
3. TigerLLM -- A Family of Bangla Large Language Models (14 Mar 2025)
   Nishat Raihan, Marcos Zampieri
4. Code LLMs: A Taxonomy-based Survey (11 Dec 2024)
   Nishat Raihan, Christian D. Newman, Marcos Zampieri
5. MojoBench: Language Modeling and Benchmarks for Mojo (23 Oct 2024)
   Nishat Raihan, Joanna C. S. Santos, Marcos Zampieri