Can Large Language Model Comprehend Ancient Chinese? A Preliminary Test on ACLUE

14 October 2023
Yixuan Zhang
Jinyan Su
LRM, ELM
arXiv: 2310.09550

Papers citing "Can Large Language Model Comprehend Ancient Chinese? A Preliminary Test on ACLUE"

3 / 3 papers shown
Measuring Hong Kong Massive Multi-Task Language Understanding
Chuxue Cao
Zhenghao Zhu
Junqi Zhu
Guoying Lu
Siyu Peng
Juntao Dai
Weijie Shi
Sirui Han
Wenhan Luo
ELM
04 May 2025
Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models
Neural Information Processing Systems (NeurIPS), 2024
Luohe Shi
Yao Yao
Zuchao Li
Lefei Zhang
Hai Zhao
30 Sep 2024
CMMLU: Measuring massive multitask language understanding in Chinese
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Jinyan Su
Yixuan Zhang
Fajri Koto
Yifei Yang
Hai Zhao
Yeyun Gong
Nan Duan
Tim Baldwin
ALM, ELM
15 Jun 2023