ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Compressing Word Embeddings via Deep Compositional Code Learning
arXiv:1711.01068 · 3 November 2017
Raphael Shu, Hideki Nakayama

Papers citing "Compressing Word Embeddings via Deep Compositional Code Learning"

14 / 14 papers shown
Parameter-Efficient Transformer Embeddings
Henry Ndubuaku, Mouad Talhi · 04 May 2025

MultiTok: Variable-Length Tokenization for Efficient LLMs Adapted from LZW Compression
Noel Elias, H. Esfahanizadeh, Kaan Kale, S. Vishwanath, Muriel Médard · 28 Oct 2024

Rediscovering Hashed Random Projections for Efficient Quantization of Contextualized Sentence Embeddings
Ulf A. Hamster, Ji-Ung Lee, Alexander Geyken, Iryna Gurevych · 13 Mar 2023

Embedding Compression for Text Classification Using Dictionary Screening
Jing Zhou, Xinru Jing, Mu Liu, Hansheng Wang · 23 Nov 2022

Toward Compact Parameter Representations for Architecture-Agnostic Neural Network Compression
Yuezhou Sun, Wenlong Zhao, Lijun Zhang, Xiao Liu, Hui Guan, Matei A. Zaharia · 19 Nov 2021

A Review of the Gumbel-max Trick and its Extensions for Discrete Stochasticity in Machine Learning
Iris A. M. Huijben, W. Kool, Max B. Paulus, Ruud J. G. van Sloun · 04 Oct 2021

Unsupervised Domain-adaptive Hash for Networks
Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li · 20 Aug 2021

METEOR: Learning Memory and Time Efficient Representations from Multi-modal Data Streams
Amila Silva, S. Karunasekera, C. Leckie, Ling Luo · 23 Jul 2020 [AI4TS]

Embedding Compression with Isotropic Iterative Quantization
Siyu Liao, Jie Chen, Yanzhi Wang, Qinru Qiu, Bo Yuan · 11 Jan 2020 [MQ]

Deconstructing and reconstructing word embedding algorithms
Edward Newell, Kian Kenyon-Dean, Jackie C.K. Cheung · 29 Nov 2019

Latent Multi-Criteria Ratings for Recommendations
Pan Li, Alexander Tuzhilin · 26 Jun 2019

Learning Compressed Sentence Representations for On-Device Text Processing
Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, Lawrence Carin · 19 Jun 2019

Compositional Coding Capsule Network with K-Means Routing for Text Classification
Hao Ren, Hong-wei Lu · 22 Oct 2018

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen · 10 Feb 2017 [MQ]