Sentence Bottleneck Autoencoders from Transformer Language Models
Ivan Montero, Nikolaos Pappas, Noah A. Smith
31 August 2021 · arXiv:2109.00055

Papers citing "Sentence Bottleneck Autoencoders from Transformer Language Models" (18 papers shown)
Text Compression for Efficient Language Generation
David Gu, Peter Belcak, Roger Wattenhofer
14 Mar 2025
Set-Theoretic Compositionality of Sentence Embeddings
Naman Bansal, Yash Mahajan, Sanjeev Kumar Sinha, S. Karmaker
28 Feb 2025
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity
Yuri Kuratov, M. Arkhipov, Aydar Bulatov, Mikhail Burtsev
18 Feb 2025
Data-efficient Performance Modeling via Pre-training
Chunting Liu, Riyadh Baghdadi
24 Jan 2025
Semformer: Transformer Language Models with Semantic Planning
Yongjing Yin, Junran Ding, Kai Song, Yue Zhang
17 Sep 2024
Time Series Anomaly Detection using Diffusion-based Models
Ioana Pintilie, Andrei Manolache, Florin Brad
02 Nov 2023
SALSA: Semantically-Aware Latent Space Autoencoder
Kathryn E. Kirchoff, Travis Maxfield, Alexander Tropsha, Shawn M. Gomez
04 Oct 2023
Towards Controllable Natural Language Inference through Lexical Inference Types
Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
07 Aug 2023
Neuro-Symbolic Execution of Generic Source Code
Yaojie Hu, Jin Tian
23 Mar 2023
Conversation Style Transfer using Few-Shot Learning
Shamik Roy, Raphael Shu, Nikolaos Pappas, Elman Mansimov, Yi Zhang, Saab Mansour, Dan Roth
16 Feb 2023
Language Model Pre-Training with Sparse Latent Typing
Liliang Ren, Zixuan Zhang, H. Wang, Clare R. Voss, Chengxiang Zhai, Heng Ji
23 Oct 2022
Decoding a Neural Retriever's Latent Space for Query Suggestion
Leonard Adolphs, Michelle Chen Huebscher, Christian Buck, Sertan Girgin, Olivier Bachem, Massimiliano Ciaramita, Thomas Hofmann
21 Oct 2022
vec2text with Round-Trip Translations
Geoffrey Cideron, Sertan Girgin, Anton Raichuk, Olivier Pietquin, Olivier Bachem, Léonard Hussenot
14 Sep 2022
E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
30 May 2022
Extracting Latent Steering Vectors from Pretrained Language Models
Nishant Subramani, Nivedita Suresh, Matthew E. Peters
10 May 2022
EncT5: A Framework for Fine-tuning T5 as Non-autoregressive Models
Frederick Liu, T. Huang, Shihang Lyu, Siamak Shakeri, Hongkun Yu, Jing Li
16 Oct 2021
Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation
Florian Mai, James Henderson
13 Oct 2021
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018