ResearchTrend.AI

© 2026 ResearchTrend.AI, All rights reserved.

Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision

13 April 2018
Sandro Pezzelle
Ionut-Teodor Sorodoc
Raffaella Bernardi
arXiv: 1804.05018 (abs · PDF · HTML)

Papers citing "Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision"

4 / 4 papers shown
Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models
Yiyuan Li, Rakesh R Menon, Sayan Ghosh, Shashank Srivastava
LRM · 08 Nov 2023
Is the Red Square Big? MALeViC: Modeling Adjectives Leveraging Visual Contexts
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
Sandro Pezzelle, Raquel Fernández
VLM · 27 Aug 2019
Multimodal Logical Inference System for Visual-Textual Entailment
Annual Meeting of the Association for Computational Linguistics (ACL), 2019
Riko Suzuki, Hitomi Yanaka, Masashi Yoshikawa, K. Mineshima, D. Bekki
NAI · 10 Jun 2019
The meaning of "most" for visual question answering models
A. Kuhnle, Ann A. Copestake
31 Dec 2018