ResearchTrend.AI
Self-Enhancing Multi-filter Sequence-to-Sequence Model

25 September 2021
Yunhao Yang
Zhaokun Xue
Andrew Whinston
Abstract

Representation learning is central to solving sequence-to-sequence problems in natural language processing: it transforms raw data into vector representations while preserving their features. However, data with significantly different features yield heterogeneous representations, which can make convergence harder. We design a multi-filter encoder-decoder model to resolve this heterogeneity problem in sequence-to-sequence tasks. The model divides the latent space into subspaces using a clustering algorithm and trains a set of decoders (filters), each of which concentrates only on the features from its corresponding subspace. As our main contribution, we design a self-enhancing mechanism that uses a reinforcement learning algorithm to optimize the clustering algorithm without additional training data. Semantic parsing and machine translation experiments show that the proposed model outperforms most benchmarks by at least 5%. We also show empirically that the self-enhancing mechanism improves performance by over 10%, and we provide evidence of a positive correlation between the model's performance and the latent space clustering.
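The core idea of the multi-filter architecture can be sketched in a few lines: encode inputs into a latent space, assign each latent vector to a subspace via nearest-centroid clustering, and route it to the decoder ("filter") responsible for that subspace. The sketch below is a hypothetical toy illustration in NumPy, not the authors' implementation; all function and class names (`encode`, `assign_clusters`, `MultiFilterDecoder`) and the linear encoder/decoders are assumptions made for clarity, and the paper's RL-based self-enhancing step for the centroids is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, W_enc):
    # Toy "encoder": project raw inputs into a latent space.
    return np.tanh(X @ W_enc)

def assign_clusters(Z, centroids):
    # Nearest-centroid assignment partitions the latent space into subspaces.
    d = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

class MultiFilterDecoder:
    """One linear decoder ("filter") per latent subspace (illustrative only)."""
    def __init__(self, n_filters, latent_dim, out_dim):
        self.W = [rng.normal(scale=0.1, size=(latent_dim, out_dim))
                  for _ in range(n_filters)]

    def decode(self, Z, assign):
        out = np.zeros((Z.shape[0], self.W[0].shape[1]))
        for k, Wk in enumerate(self.W):
            mask = assign == k
            # Each decoder only sees latent vectors from its own subspace.
            out[mask] = Z[mask] @ Wk
        return out

# Toy usage: 8 inputs, 4 raw features, 3 latent dims, 2 subspaces, 5 outputs.
X = rng.normal(size=(8, 4))
W_enc = rng.normal(size=(4, 3))
centroids = rng.normal(size=(2, 3))

Z = encode(X, W_enc)
assign = assign_clusters(Z, centroids)
decoder = MultiFilterDecoder(n_filters=2, latent_dim=3, out_dim=5)
Y = decoder.decode(Z, assign)
```

In the paper's full model, the centroids would additionally be refined by a reinforcement learning signal so that the clustering itself improves without extra training data.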
