Investigating the Role of Prior Disambiguation in Deep-learning Compositional Models of Meaning

15 November 2014
Jianpeng Cheng
Dimitri Kartsaklis
Edward Grefenstette
Abstract

This paper aims to explore the effect of prior disambiguation on neural network-based compositional models, with the hope that better semantic representations for text compounds can be produced. We disambiguate the input word vectors before they are fed into a compositional deep net. A series of evaluations shows the positive effect of prior disambiguation for such deep models.
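A minimal sketch of the idea described in the abstract, assuming per-word sense inventories and a simple additive composition step; the sense-selection heuristic (cosine similarity to the context centroid) and all function and variable names below are illustrative stand-ins, not the authors' exact pipeline or deep architecture.

```python
# Prior disambiguation before composition: pick one sense vector per word
# using its phrase context, then compose the disambiguated vectors.
# The composition here is a plain vector sum standing in for a deep net.
import numpy as np


def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))


def disambiguate(word, context_vecs, sense_inventory):
    """Pick the sense vector closest to the centroid of the context vectors."""
    senses = sense_inventory[word]              # candidate sense vectors
    centroid = np.mean(context_vecs, axis=0)
    return max(senses, key=lambda s: cosine(s, centroid))


def compose(phrase, sense_inventory, generic_vecs):
    """Disambiguate each word against the rest of the phrase, then compose."""
    disambiguated = []
    for i, w in enumerate(phrase):
        context = [generic_vecs[c] for j, c in enumerate(phrase) if j != i]
        disambiguated.append(disambiguate(w, context, sense_inventory))
    return np.sum(disambiguated, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 5
    # Toy data: "bank" has two senses; the other words have one each.
    sense_inventory = {
        "bank":  [rng.normal(size=dim), rng.normal(size=dim)],
        "river": [rng.normal(size=dim)],
        "steep": [rng.normal(size=dim)],
    }
    generic_vecs = {w: np.mean(v, axis=0) for w, v in sense_inventory.items()}
    print(compose(["steep", "river", "bank"], sense_inventory, generic_vecs))
```

In the paper's setting, the sum above would be replaced by a trained compositional deep network; the key point the sketch illustrates is that disambiguation happens before the word vectors enter the composition stage.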
