ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

How agents see things: On visual representations in an emergent language game
Diane Bouchacourt, Marco Baroni
arXiv:1808.10696, 31 August 2018

Papers citing "How agents see things: On visual representations in an emergent language game" (12 papers)
A Review of the Applications of Deep Learning-Based Emergent Communication
Brendon Boldt, David R. Mortensen
03 Jul 2024
Models of symbol emergence in communication: a conceptual review and a guide for avoiding local minima
Julian Zubek, Tomasz Korbak, J. Rączaszek-Leonardi
08 Mar 2023
Trust-based Consensus in Multi-Agent Reinforcement Learning Systems
Ho Long Fung, Victor-Alexandru Darvariu, Stephen Hailes, Mirco Musolesi
25 May 2022
Emergent Graphical Conventions in a Visual Communication Game
Shuwen Qiu, Sirui Xie, Lifeng Fan, Tao Gao, Jungseock Joo, Song-Chun Zhu, Yixin Zhu
28 Nov 2021
Catalytic Role of Noise and Necessity of Inductive Biases in the Emergence of Compositional Communication
Łukasz Kuciński, Tomasz Korbak, P. Kołodziej, Piotr Miłoś
11 Nov 2021
Shared Visual Representations of Drawing for Communication: How do different biases affect human interpretability and intent?
Daniela Mihai, Jonathon S. Hare
15 Oct 2021
Few-shot Language Coordination by Modeling Theory of Mind
Hao Zhu, Graham Neubig, Yonatan Bisk
12 Jul 2021
Quasi-Equivalence Discovery for Zero-Shot Emergent Communication
Kalesha Bullard, Douwe Kiela, Franziska Meier, Joelle Pineau, Jakob N. Foerster
14 Mar 2021
Emergent Communication Pretraining for Few-Shot Machine Translation
Yaoyiran Li, E. Ponti, Ivan Vulić, Anna Korhonen
02 Nov 2020
Emergent Multi-Agent Communication in the Deep Learning Era
Angeliki Lazaridou, Marco Baroni
03 Jun 2020
Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning
Angeliki Lazaridou, Anna Potapenko, O. Tieleman
14 May 2020
Evaluating the Representational Hub of Language and Vision Models
Ravi Shekhar, Ece Takmaz, Raquel Fernández, Raffaella Bernardi
12 Apr 2019