Revisiting Large Language Models as Zero-shot Relation Extractors

8 October 2023
Guozheng Li, Peng Wang, Wenjun Ke
KELM · LRM · ReLM

Papers citing "Revisiting Large Language Models as Zero-shot Relation Extractors"

14 / 14 papers shown

Meta In-Context Learning Makes Large Language Models Better Zero and Few-Shot Relation Extractors
Guozheng Li, Peng Wang, Jiajun Liu, Yikai Guo, Ke Ji, Ziyu Shang, Zijie Xu
LRM · 27 Apr 2024

UrbanKGent: A Unified Large Language Model Agent Framework for Urban Knowledge Graph Construction
Yansong Ning, Hao Liu
LLMAG · 10 Feb 2024

Large Language Model Is Not a Good Few-shot Information Extractor, but a Good Reranker for Hard Samples!
Yubo Ma, Yixin Cao, YongChing Hong, Aixin Sun
RALM · 15 Mar 2023

Ask Me Anything: A simple strategy for prompting language models
Simran Arora, A. Narayan, Mayee F. Chen, Laurel J. Orr, Neel Guha, Kush S. Bhatia, Ines Chami, Frederic Sala, Christopher Ré
ReLM · LRM · 05 Oct 2022

Large Language Models are Few-Shot Clinical Information Extractors
Monica Agrawal, S. Hegselmann, Hunter Lang, Yoon Kim, David Sontag
BDL · LM&MA · 25 May 2022

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
ReLM · LRM · 24 May 2022

DeepStruct: Pretraining of Language Models for Structure Prediction
Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, Dawn Song
21 May 2022

Summarization as Indirect Supervision for Relation Extraction
K. Lu, I-Hung Hsu, Wenxuan Zhou, Mingyu Derek Ma, Muhao Chen
19 May 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM · BDL · LRM · AI4CE · 21 Mar 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 04 Mar 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro · LRM · AI4CE · ReLM · 28 Jan 2022

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
LRM · 15 Oct 2021

What Makes Good In-Context Examples for GPT-3?
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
AAML · RALM · 17 Jan 2021

Text Summarization with Pretrained Encoders
Yang Liu, Mirella Lapata
MILM · 22 Aug 2019