Learning and Evaluating Sparse Interpretable Sentence Embeddings

23 September 2018
V. Trifonov, O. Ganea, Anna Potapenko, Thomas Hofmann

Papers citing "Learning and Evaluating Sparse Interpretable Sentence Embeddings"

7 papers shown

Disentangling Dense Embeddings with Sparse Autoencoders
Charles O'Neill, Christine Ye, K. Iyer, John F. Wu
01 Aug 2024

Answer is All You Need: Instruction-following Text Embedding via Answering the Question
Letian Peng, Yuwei Zhang, Zilong Wang, Jayanth Srinivasa, Gaowen Liu, Zihan Wang, Jingbo Shang
15 Feb 2024

Practice with Graph-based ANN Algorithms on Sparse Data: Chi-square Two-tower model, HNSW, Sign Cauchy Projections
Ping Li, Weijie Zhao, Chao Wang, Qi Xia, Alice Wu, Lijun Peng
13 Jun 2023

Explaining the Deep Natural Language Processing by Mining Textual Interpretable Features
F. Ventura, Salvatore Greco, D. Apiletti, Tania Cerquitelli
12 Jun 2021

Learning Sparse Sentence Encoding without Supervision: An Exploration of Sparsity in Variational Autoencoders
Victor Prokhorov, Yingzhen Li, Ehsan Shareghi, Nigel Collier
25 Sep 2020

The Explanation Game: Towards Prediction Explainability through Sparse Communication
Marcos Vinícius Treviso, André F. T. Martins
28 Apr 2020

Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop
Afra Alishahi, Grzegorz Chrupała, Tal Linzen
05 Apr 2019