BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies
arXiv:1511.06909 · 21 November 2015
Shihao Ji, S.V.N. Vishwanathan, N. Satish, Michael J. Anderson, Pradeep Dubey
Papers citing "BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies" (8 of 8 shown):
A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection
Felix Nobis, Maximilian Geisslinger, Markus Weber, Johannes Betz, Markus Lienkamp (15 May 2020)

Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs
Sachin Kumar, Yulia Tsvetkov (10 Dec 2018)

Conditional Noise-Contrastive Estimation of Unnormalised Models
Ciwan Ceylan, Michael U. Gutmann (10 Jun 2018)

Online normalizer calculation for softmax
Maxim Milakov, N. Gimelshein (08 May 2018)

Language Modeling with Gated Convolutional Networks
Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier (23 Dec 2016)

Efficient softmax approximation for GPUs
Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, Hervé Jégou (14 Sep 2016)

The Z-loss: a shift and scale invariant classification loss belonging to the Spherical Family
A. D. Brébisson, Pascal Vincent (29 Apr 2016)

Tree-to-Sequence Attentional Neural Machine Translation
Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka (19 Mar 2016)