ResearchTrend.AI

Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes (arXiv:1511.07067)

22 November 2015
Satwik Kottur, Ramakrishna Vedantam, José M. F. Moura, Devi Parikh
Topics: VLM

Papers citing "Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes"

15 papers shown.

1. GOAL: Global-local Object Alignment Learning
   Hyungyu Choi, Young Kyun Jang, Chanho Eom
   VLM · 0 citations · 22 Mar 2025

2. Kiki or Bouba? Sound Symbolism in Vision-and-Language Models
   Morris Alper, Hadar Averbuch-Elor
   10 citations · 25 Oct 2023

3. Learning Zero-Shot Multifaceted Visually Grounded Word Embeddings via Multi-Task Training
   Hassan Shahmohammadi, Hendrik P. A. Lensch, R. Baayen
   19 citations · 15 Apr 2021

4. COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning
   Simon Ging, Mohammadreza Zolfaghari, Hamed Pirsiavash, Thomas Brox
   ViT, CLIP · 169 citations · 01 Nov 2020

5. Personalizing Fast-Forward Videos Based on Visual and Textual Features from Social Network
   W. Ramos, M. Silva, Edson Roteia Araujo Junior, Alan C. Neves, Erickson R. Nascimento
   6 citations · 29 Dec 2019

6. MULE: Multimodal Universal Language Embedding
   Donghyun Kim, Kuniaki Saito, Kate Saenko, Stan Sclaroff, Bryan A. Plummer
   VLM · 40 citations · 08 Sep 2019

7. Word2vec to behavior: morphology facilitates the grounding of language in machines
   David Matthews, Sam Kriegman, C. Cappelle, Josh Bongard
   LM&Ro · 6 citations · 03 Aug 2019

8. Wasserstein Barycenter Model Ensembling
   Pierre Dognin, Igor Melnyk, Youssef Mroueh, Jerret Ross, Cicero Nogueira dos Santos, Tom Sercu
   24 citations · 13 Feb 2019

9. Don't only Feel Read: Using Scene text to understand advertisements
   Arka Ujjal Dey, Suman K. Ghosh, Ernest Valveny
   DiffM · 4 citations · 21 Jun 2018

10. Learning from Noisy Web Data with Category-level Supervision
    Li Niu, Qingtao Tang, Ashok Veeraraghavan, A. Sabharwal
    NoLa · 32 citations · 10 Mar 2018

11. Learning Multi-Modal Word Representation Grounded in Visual Context
    Éloi Zablocki, Benjamin Piwowarski, Laure Soulier, Patrick Gallinari
    SSL · 30 citations · 09 Nov 2017

12. Learning Robust Visual-Semantic Embeddings
    Yao-Hung Hubert Tsai, Liang-Kang Huang, Ruslan Salakhutdinov
    SSL, AI4TS · 166 citations · 17 Mar 2017

13. Sound-Word2Vec: Learning Word Representations Grounded in Sounds
    Ashwin K. Vijayakumar, Ramakrishna Vedantam, Devi Parikh
    22 citations · 06 Mar 2017

14. Multilingual Visual Sentiment Concept Matching
    Nikolaos Pappas, Miriam Redi, Mercan Topkara, Brendan Jou, Hongyi Liu, Tao Chen, Shih-Fu Chang
    CVBM · 14 citations · 07 Jun 2016

15. We Are Humor Beings: Understanding and Predicting Visual Humor
    Arjun Chandrasekaran, Ashwin K. Vijayakumar, Stanislaw Antol, Joey Tianyi Zhou, Dhruv Batra, C. L. Zitnick, Devi Parikh
    56 citations · 14 Dec 2015