ResearchTrend.AI


Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models (arXiv:2208.11445)

24 August 2022
M. Bueno, Carlos Gemmell, Jeffrey Stephen Dalton, R. Lotufo, Rodrigo Nogueira
LRM

Papers citing "Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models" (6 of 6 shown):
Faith and Fate: Limits of Transformers on Compositionality
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, ..., Sean Welleck, Xiang Ren, Allyson Ettinger, Zaïd Harchaoui, Yejin Choi
ReLM, LRM · 29 May 2023
Generate, Transform, Answer: Question Specific Tool Synthesis for Tabular Data
Carlos Gemmell, Jeffrey Stephen Dalton
LMTD · 17 Mar 2023
Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM, BDL, LRM, AI4CE · 21 Mar 2022
Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks
Wang Zhu, Peter Shaw, Tal Linzen, Fei Sha
09 Nov 2021
Investigating Numeracy Learning Ability of a Text-to-Text Transfer Model
Kuntal Kumar Pal, Chitta Baral
10 Sep 2021
Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021