The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design

arXiv:2110.04541 · 9 October 2021 · AI4CE
Yoav Levine, Noam Wies, Daniel Jannai, D. Navon, Yedid Hoshen, Amnon Shashua

Papers citing "The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design"

11 / 11 papers shown

Trillion 7B Technical Report
Sungjun Han, Juyoung Suk, Suyeong An, Hyungguk Kim, Kyuseok Kim, Wonsuk Yang, Seungtaek Choi, Jamin Shin
21 Apr 2025

HierPromptLM: A Pure PLM-based Framework for Representation Learning on Heterogeneous Text-rich Networks
Q. Zhu, Liang Zhang, Qianxiong Xu, Cheng Long
22 Jan 2025

Analysing The Impact of Sequence Composition on Language Model Pre-Training
Yu Zhao, Yuanbin Qu, Konrad Staniszewski, Szymon Tworkowski, Wei Liu, Piotr Miłoś, Yuxiang Wu, Pasquale Minervini
21 Feb 2024

Multiple Thinking Achieving Meta-Ability Decoupling for Object Navigation
Ronghao Dang, Lu Chen, Liuyi Wang, Zongtao He, Chengju Liu, Qi Chen
03 Feb 2023 · LRM

On the Ability of Graph Neural Networks to Model Interactions Between Vertices
Noam Razin, Tom Verbin, Nadav Cohen
29 Nov 2022

Transformer Vs. MLP-Mixer: Exponential Expressive Gap For NLP Problems
D. Navon, A. Bronstein
17 Aug 2022 · MoE

LinkBERT: Pretraining Language Models with Document Links
Michihiro Yasunaga, J. Leskovec, Percy Liang
29 Mar 2022 · KELM

Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks
Noam Razin, Asaf Maman, Nadav Cohen
27 Jan 2022

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020

Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks
Nandan Thakur, Nils Reimers, Johannes Daxenberger, Iryna Gurevych
16 Oct 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018 · ELM