Multi-modal Embedding Fusion-based Recommender

13 May 2020
Anna Wróblewska
Jacek Dąbrowski
Michał Pastuszak
Andrzej Michalowski
Michal Daniluk
Barbara Rychalska
Mikolaj Wieczorek
Sylwia Sysko-Romanczuk (Synerise)
arXiv: 2005.06331 (abs | PDF | HTML)
Abstract

Recommendation systems have recently gained popularity worldwide, with primary use cases in online interaction systems and a significant focus on e-commerce platforms. We have developed a machine learning-based recommendation platform that can be easily applied to almost any domain of items and/or actions. Unlike existing recommendation systems, our platform natively supports multiple types of interaction data together with multiple modalities of metadata. This is achieved through multi-modal fusion of various data representations. We deployed the platform in multiple e-commerce stores of different kinds, e.g. food and beverages, shoes, fashion items, and telecom operators. Here, we present our system, its flexibility, and its performance. We also show benchmark results on open datasets that significantly outperform prior state-of-the-art work.
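The abstract only names the multi-modal fusion idea without detail, so the following is a minimal illustrative sketch in plain NumPy, not the authors' actual architecture: each modality embedding is normalised, projected into a shared space by a (here random, in practice learned) matrix, and the projections are summed into a single item vector that can be scored against a user vector by dot product. All dimensions, matrices, and names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings for one item in three modalities (dimensions are arbitrary).
text_emb  = rng.normal(size=64)   # e.g. from a description encoder
image_emb = rng.normal(size=128)  # e.g. from an image encoder
inter_emb = rng.normal(size=32)   # e.g. from user-item interaction history

def l2_normalize(v, eps=1e-8):
    """Scale a vector to unit length so modalities contribute comparably."""
    return v / (np.linalg.norm(v) + eps)

def fuse(embeddings, projections):
    """Project each modality into a shared space and sum the projections."""
    fused = sum(w @ l2_normalize(e) for e, w in zip(embeddings, projections))
    return l2_normalize(fused)

shared_dim = 64
# Hypothetical projection matrices; in a real system these would be learned.
projections = [
    rng.normal(size=(shared_dim, 64)),
    rng.normal(size=(shared_dim, 128)),
    rng.normal(size=(shared_dim, 32)),
]

item_vec = fuse([text_emb, image_emb, inter_emb], projections)

# A user profile vector in the same shared space; recommendation then
# reduces to ranking items by similarity to this vector.
user_vec = l2_normalize(rng.normal(size=shared_dim))
score = float(user_vec @ item_vec)
print(f"user-item score: {score:.4f}")

Under these assumptions, ranking a catalogue amounts to computing such a score for every candidate item and returning the top results; the fusion step is what lets text, image, and interaction signals contribute to a single ranking.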
