ResearchTrend.AI
Revisiting Simple Neural Probabilistic Language Models
arXiv: 2104.03474
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
8 April 2021
Simeng Sun
Mohit Iyyer

Papers citing "Revisiting Simple Neural Probabilistic Language Models"

10 papers listed
Can Transformers Learn $n$-gram Language Models?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Anej Svete
Nadav Borenstein
M. Zhou
Isabelle Augenstein
Robert Bamler
03 Oct 2024
Synthetic4Health: Generating Annotated Synthetic Clinical Letters
Frontiers in Digital Health (Front. Digit. Health), 2024
Libo Ren
Samuel Belkadi
Lifeng Han
Warren Del-Pinto
Goran Nenadic
14 Sep 2024
Sampling-based Pseudo-Likelihood for Membership Inference Attacks
Masahiro Kaneko
Youmi Ma
Yuki Wata
Naoaki Okazaki
17 Apr 2024
The Role of $n$-gram Smoothing in the Age of Neural Networks
Luca Malagutti
Andrius Buinovskij
Anej Svete
Clara Meister
Afra Amini
Robert Bamler
25 Mar 2024
Memory-efficient Stochastic Methods for Memory-based Transformers
Vishwajit Kumar Vishnu
C. Sekhar
14 Nov 2023
TorchDEQ: A Library for Deep Equilibrium Models
Zhengyang Geng
J. Zico Kolter
28 Oct 2023
On "Scientific Debt" in NLP: A Case for More Rigour in Language Model Pre-Training Research
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Made Nindyatama Nityasya
Haryo Akbarianto Wibowo
Alham Fikri Aji
Genta Indra Winata
Radityo Eko Prasojo
Phil Blunsom
A. Kuncoro
05 Jun 2023
Are Neighbors Enough? Multi-Head Neural n-gram can be Alternative to Self-attention
Mengsay Loem
Sho Takase
Masahiro Kaneko
Naoaki Okazaki
27 Jul 2022
N-Grammer: Augmenting Transformers with latent n-grams
Aurko Roy
Rohan Anil
Guangda Lai
Benjamin Lee
Jeffrey Zhao
...
Yu
Phuong Dao
Christopher Fifty
Zhiwen Chen
Yonghui Wu
13 Jul 2022
Revisiting Deep Learning Models for Tabular Data
Neural Information Processing Systems (NeurIPS), 2021
Yu. V. Gorishniy
Ivan Rubachev
Valentin Khrulkov
Artem Babenko
22 Jun 2021